Please use this identifier to cite or link to this item: https://doi.org/10.1145/2501643.2501650
Title: Towards decrypting attractiveness via multi-modality cues
Authors: Nguyen, T.V.
Liu, S. 
Ni, B.
Tan, J.
Rui, Y.
Yan, S.
Keywords: Dressing
Face
Latent attributes
Voice attractiveness
Issue Date: 2013
Citation: Nguyen, T.V., Liu, S., Ni, B., Tan, J., Rui, Y., Yan, S. (2013). Towards decrypting attractiveness via multi-modality cues. ACM Transactions on Multimedia Computing, Communications and Applications 9 (4) : -. ScholarBank@NUS Repository. https://doi.org/10.1145/2501643.2501650
Abstract: Decrypting the secret of beauty or attractiveness has been the pursuit of artists and philosophers for centuries. To date, computational models for attractiveness estimation have been actively explored in the computer vision and multimedia communities, yet with the focus mainly on facial features. In this article, we conduct a comprehensive study on female attractiveness conveyed by single/multiple modalities of cues, that is, face, dressing, and/or voice, and aim to discover how different modalities individually and collectively affect the human sense of beauty. To extensively investigate the problem, we collect the Multi-Modality Beauty (M2B) dataset, which is annotated with attractiveness levels converted from manual k-wise ratings and semantic attributes of different modalities. Inspired by the common consensus that middle-level attribute prediction can assist higher-level computer vision tasks, we manually labeled many attributes for each modality. Next, a tri-layer Dual-supervised Feature-Attribute-Task (DFAT) network is proposed to jointly learn the attribute model and attractiveness model of single/multiple modalities. To remedy possible loss of information caused by incomplete manual attributes, we also propose a novel Latent Dual-supervised Feature-Attribute-Task (LDFAT) network, where latent attributes are combined with manual attributes to contribute to the final attractiveness estimation. The extensive experimental evaluations on the collected M2B dataset well demonstrate the effectiveness of the proposed DFAT and LDFAT networks for female attractiveness prediction. © 2014 ACM.
Source Title: ACM Transactions on Multimedia Computing, Communications and Applications
URI: http://scholarbank.nus.edu.sg/handle/10635/77935
ISSN: 1551-6865
DOI: 10.1145/2501643.2501650
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.

