Title: Predicting occupation via human clothing and contexts
Source: Song, Z., Wang, M., Hua, X.-S., Yan, S. (2011). Predicting occupation via human clothing and contexts. Proceedings of the IEEE International Conference on Computer Vision: 1084-1091. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2011.6126355
Abstract: Predicting human occupations in photos has great application potential in intelligent services and systems. However, traditional classification methods cannot reliably distinguish occupations because of the complex relation between occupations and low-level image features. In this paper, we investigate the human occupation prediction problem by modeling the appearance of human clothing as well as the surrounding context. Human clothing, given its complex details and varied appearance, is described via part-based modeling on automatically aligned patches of human body parts. The image patches are represented with semantic-level patterns such as clothing and haircut styles using sparse-coding-based methods, yielding informative and noise-tolerant representations. This description of human clothing is shown to be more effective than traditional methods. Several kinds of surrounding context are also investigated as a complement to the clothing features in cases where background information is available. Experiments are conducted on a well-labeled image database that contains more than 5,000 images from 20 representative occupation categories. This preliminary study shows that human occupation is reasonably predictable using the proposed clothing features and available context. © 2011 IEEE.
Source Title: Proceedings of the IEEE International Conference on Computer Vision
Appears in Collections: Staff Publications
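The abstract describes encoding body-part patch descriptors with sparse coding and pooling the codes into a clothing feature. The paper itself gives no code; the sketch below only illustrates that general pipeline using scikit-learn's `SparseCoder`. The dictionary, descriptor dimensions, and max-pooling step are all hypothetical choices, not the authors' actual configuration.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Hypothetical learned dictionary: 64 atoms over 128-dim patch descriptors.
# In the paper's setting this would be learned from clothing patches.
dictionary = rng.standard_normal((64, 128))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# Hypothetical descriptors extracted from aligned body-part patches
# (e.g., head, torso, arms) of one person.
patches = rng.standard_normal((10, 128))

# Encode each patch sparsely over the dictionary.
coder = SparseCoder(
    dictionary=dictionary,
    transform_algorithm="lasso_lars",
    transform_alpha=0.1,
)
codes = coder.transform(patches)  # shape: (10, 64), mostly zeros

# Pool the sparse codes into a single clothing descriptor per person
# (max pooling over patches, an assumed pooling choice).
feature = np.abs(codes).max(axis=0)  # shape: (64,)
```

A per-person feature vector like `feature` would then feed an ordinary classifier over the 20 occupation categories; the sparse codes give the noise tolerance the abstract refers to, since only a few dictionary atoms are active per patch.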
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.