Please use this identifier to cite or link to this item: https://doi.org/10.1109/ICCV.2009.5459189
DC Field | Value
dc.title | Simultaneous and orthogonal decomposition of data using multimodal discriminant analysis
dc.contributor.author | Sim, T.
dc.contributor.author | Zhang, S.
dc.contributor.author | Li, J.
dc.contributor.author | Chen, Y.
dc.date.accessioned | 2013-07-04T07:58:47Z
dc.date.available | 2013-07-04T07:58:47Z
dc.date.issued | 2009
dc.identifier.citation | Sim, T., Zhang, S., Li, J., Chen, Y. (2009). Simultaneous and orthogonal decomposition of data using multimodal discriminant analysis. Proceedings of the IEEE International Conference on Computer Vision: 452-459. ScholarBank@NUS Repository. https://doi.org/10.1109/ICCV.2009.5459189
dc.identifier.isbn | 9781424444205
dc.identifier.uri | http://scholarbank.nus.edu.sg/handle/10635/40195
dc.description.abstract | We present Multimodal Discriminant Analysis (MMDA), a novel method for decomposing variations in a dataset into independent factors (modes). For face images, MMDA effectively separates personal identity, illumination and pose into orthogonal subspaces. MMDA is based on maximizing the Fisher Criterion on all modes at the same time, and is therefore well-suited for multimodal and mode-invariant pattern recognition. We also show that MMDA may be used for dimension reduction, and for synthesizing images under novel illumination and even novel personal identity. ©2009 IEEE.
dc.description.uri | http://libproxy1.nus.edu.sg/login?url=http://dx.doi.org/10.1109/ICCV.2009.5459189
dc.source | Scopus
dc.type | Conference Paper
dc.contributor.department | COMPUTATIONAL SCIENCE
dc.description.doi | 10.1109/ICCV.2009.5459189
dc.description.sourcetitle | Proceedings of the IEEE International Conference on Computer Vision
dc.description.page | 452-459
dc.description.coden | PICVE
dc.identifier.isiut | 000294955300058
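
The abstract above describes MMDA as maximizing the Fisher Criterion over several labelled modes at once, yielding mutually orthogonal subspaces (e.g. identity vs. illumination in face images). The Python sketch below is only a minimal illustration of that idea, not the authors' algorithm: it builds a Fisher (LDA-style) basis per mode from between-class and within-class scatter, then orthogonalizes the bases across modes. All data, mode labels, and function names here are hypothetical.

    # Illustrative sketch: per-mode Fisher bases with cross-mode
    # orthogonalization. NOT the MMDA algorithm from the paper.
    import numpy as np
    from scipy.linalg import eigh

    def scatter_matrices(X, labels):
        """Between-class (Sb) and within-class (Sw) scatter for one mode."""
        mu = X.mean(axis=0)
        d = X.shape[1]
        Sb = np.zeros((d, d))
        Sw = np.zeros((d, d))
        for c in np.unique(labels):
            Xc = X[labels == c]
            mc = Xc.mean(axis=0)
            Sb += len(Xc) * np.outer(mc - mu, mc - mu)
            Sw += (Xc - mc).T @ (Xc - mc)
        return Sb, Sw

    def fisher_basis(X, labels, k, reg=1e-6):
        """Top-k directions maximizing the Fisher criterion w'Sb w / w'Sw w."""
        Sb, Sw = scatter_matrices(X, labels)
        Sw += reg * np.eye(Sw.shape[0])   # regularize so Sw is invertible
        vals, vecs = eigh(Sb, Sw)         # generalized eigenproblem Sb v = l Sw v
        return vecs[:, np.argsort(vals)[::-1][:k]]

    # Hypothetical data: 200 samples, 50-dim features, two labelled modes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))
    modes = {"identity": rng.integers(0, 5, 200),
             "illumination": rng.integers(0, 4, 200)}

    bases = []
    for labels in modes.values():
        B = fisher_basis(X, labels, k=3)
        if bases:                          # project out earlier modes so the
            P = np.hstack(bases)           # subspaces come out orthogonal
            B = B - P @ (P.T @ B)
        B, _ = np.linalg.qr(B)             # orthonormalize the basis
        bases.append(B)

Note that this greedy, one-mode-at-a-time version only conveys the flavour of the decomposition; per the abstract, MMDA itself maximizes the criterion on all modes simultaneously.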
Appears in Collections: Staff Publications

Files in This Item:
There are no files associated with this item.
