Please use this identifier to cite or link to this item: https://doi.org/10.1117/12.650981
Title: Using context and similarity for face and location identification
Authors: Davis, M.
Smith, M.
Stentiford, F.
Bamidele, A.
Canny, J.
Good, N.
King, S.
Janakiraman, R. 
Keywords: Bluetooth
Context-Aware
Cameraphone
Clustering
Content-based Image Retrieval (CBIR)
Face Recognition
GPS
Metadata
Mobility
PCA
SFA
Similarity
Issue Date: 2006
Citation: Davis, M., Smith, M., Stentiford, F., Bamidele, A., Canny, J., Good, N., King, S., Janakiraman, R. (2006). Using context and similarity for face and location identification. Proceedings of SPIE - The International Society for Optical Engineering 6061. ScholarBank@NUS Repository. https://doi.org/10.1117/12.650981
Abstract: This paper describes a new approach to the automatic detection of human faces and places depicted in photographs taken on cameraphones. Cameraphones offer a unique opportunity to pursue new approaches to media analysis and management: namely, to combine the analysis of automatically gathered contextual metadata with media content analysis to fundamentally improve image content recognition and retrieval. Current approaches to content-based image analysis are not sufficient to enable retrieval of cameraphone photos by high-level semantic concepts, such as who is in the photo or what the photo is actually depicting. In this paper, new methods for determining image similarity are combined with analysis of automatically acquired contextual metadata to substantially improve the performance of face and place recognition algorithms. For faces, we apply Sparse-Factor Analysis (SFA) to both the automatically captured contextual metadata and the results of PCA (Principal Components Analysis) of the photo content to achieve 60% face recognition accuracy for the people depicted in our photo database, 40% better than media analysis alone. For location, grouping visually similar photos using a model of Cognitive Visual Attention (CVA) in conjunction with contextual metadata analysis yields a significant improvement over color histogram and CVA methods alone: location retrieval precision improves from 30% for color histogram and CVA image analysis, to 55% for contextual metadata alone, to 67% for contextual metadata combined with CVA image analysis. The combination of context and content analysis indicates the faces and places depicted in cameraphone photos significantly better than image analysis or context analysis alone. We believe these results point to a new context-aware paradigm for image analysis. © 2006 SPIE-IS&T.
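To make the fusion idea in the abstract concrete, below is a minimal sketch of combining PCA-based face features with numeric contextual-metadata features before classification. All names (fuse_and_classify, context_features, and so on) are hypothetical illustrations, and the simple concatenation plus nearest-neighbour classifier stands in for the paper's Sparse-Factor Analysis, whose details are not given in this record.

    # Illustrative sketch only (hypothetical names): fuse eigenface-style PCA
    # features of photo content with contextual metadata, then classify with a
    # nearest-neighbour rule. The paper's actual fusion step uses Sparse-Factor
    # Analysis (SFA), which is not reproduced here.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neighbors import KNeighborsClassifier

    def fuse_and_classify(face_pixels, context_features, labels,
                          query_face, query_context):
        # face_pixels: (n, d) flattened face crops
        # context_features: (n, k) numeric encodings of contextual metadata,
        #   e.g. time of day, GPS cell, Bluetooth co-presence (assumed encoding)
        # labels: (n,) person identities
        pca = PCA(n_components=20)            # PCA of the photo content
        face_feats = pca.fit_transform(face_pixels)
        # Crude fusion: concatenate the two modalities into one feature vector.
        fused = np.hstack([face_feats, context_features])
        clf = KNeighborsClassifier(n_neighbors=1).fit(fused, labels)
        query = np.hstack([pca.transform(query_face.reshape(1, -1)),
                           query_context.reshape(1, -1)])
        return clf.predict(query)[0]

The same pattern would apply to the location task: replace the PCA features with CVA-based visual similarity scores and retrieve the nearest labelled location. Concatenation is the crudest fusion choice; a joint latent-factor model such as SFA can learn how to weight the two modalities rather than treating them as equally scaled.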
Source Title: Proceedings of SPIE - The International Society for Optical Engineering
URI: http://scholarbank.nus.edu.sg/handle/10635/40670
ISBN: 0819461016
ISSN: 0277-786X
DOI: 10.1117/12.650981
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
