Please use this identifier to cite or link to this item: https://doi.org/10.1145/1322192.1322197
DC Field: Value
dc.title: The painful face - Pain expression recognition using active appearance models
dc.contributor.author: Ashraf A.B.
dc.contributor.author: Lucey S.
dc.contributor.author: Cohn J.F.
dc.contributor.author: Chen T.
dc.contributor.author: Ambadar Z.
dc.contributor.author: Prkachin K.
dc.contributor.author: Solomon P.
dc.contributor.author: Theobald B.-J.
dc.date.accessioned: 2018-08-21T05:06:52Z
dc.date.available: 2018-08-21T05:06:52Z
dc.date.issued: 2007
dc.identifier.citation: Ashraf A.B., Lucey S., Cohn J.F., Chen T., Ambadar Z., Prkachin K., Solomon P., Theobald B.-J. (2007). The painful face - Pain expression recognition using active appearance models. Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07: 9-14. ScholarBank@NUS Repository. https://doi.org/10.1145/1322192.1322197
dc.identifier.isbn: 9781595938176
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/146264
dc.description.abstract: Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or not even possible, as in young children or the severely ill. Behavioral scientists have identified reliable and valid facial indicators of pain. Until now they required manual measurement by highly skilled observers. We developed an approach that automatically recognizes acute pain. Adult patients with rotator cuff injury were video-recorded while a physiotherapist manipulated their affected and unaffected shoulder. Skilled observers rated pain expression from the video on a 5-point Likert-type scale. From these ratings, sequences were categorized as no-pain (rating of 0), pain (rating of 3, 4, or 5), and indeterminate (rating of 1 or 2). We explored machine learning approaches for pain-no pain classification. Active Appearance Models (AAM) were used to decouple shape and appearance parameters from the digitized face images. Support vector machines (SVM) were used with several representations from the AAM. Using a leave-one-out procedure, we achieved an equal error rate of 19% (hit rate = 81%) using canonical appearance and shape features. These findings suggest the feasibility of automatic pain detection from video. Copyright 2007 ACM.
dc.source: Scopus
dc.subject: Active appearance models
dc.subject: Automatic facial image analysis
dc.subject: Facial expression
dc.subject: Pain
dc.subject: Support vector machines
dc.type: Conference Paper
dc.contributor.department: OFFICE OF THE PROVOST
dc.contributor.department: DEPARTMENT OF COMPUTER SCIENCE
dc.description.doi: 10.1145/1322192.1322197
dc.description.sourcetitle: Proceedings of the 9th International Conference on Multimodal Interfaces, ICMI'07
dc.description.page: 9-14
dc.published.state: published
dc.grant.id: MOP 77799
dc.grant.id: MH 51435
dc.grant.fundingagency: CIHR, Canadian Institutes of Health Research
dc.grant.fundingagency: NIMH, National Institute of Mental Health
Appears in Collections: Staff Publications
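
Note on the method described in the abstract: the pipeline feeds AAM-derived shape and appearance parameters to a support vector machine, evaluates with a leave-one-out procedure, and reports an equal error rate. The sketch below is not the authors' code; it is a minimal illustration of that kind of pipeline, assuming hypothetical features, labels, and subject_ids arrays (AAM feature extraction is presumed to happen upstream) and using scikit-learn's SVM, leave-one-group-out splitter, and ROC utilities.

# Illustrative sketch only; hypothetical inputs, not the paper's implementation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_curve

def leave_one_subject_out_eer(features, labels, subject_ids):
    """Train one SVM per held-out subject and report the pooled equal error rate.

    features    : (n_samples, n_dims) array of AAM shape/appearance parameters
    labels      : (n_samples,) array, 1 = pain, 0 = no-pain
    subject_ids : (n_samples,) array identifying the patient each frame came from
    """
    scores, truth = [], []
    splitter = LeaveOneGroupOut()
    for train_idx, test_idx in splitter.split(features, labels, groups=subject_ids):
        clf = SVC(kernel="linear")  # linear SVM on the AAM parameter vectors
        clf.fit(features[train_idx], labels[train_idx])
        # Signed distance to the separating hyperplane serves as a pain score.
        scores.extend(clf.decision_function(features[test_idx]))
        truth.extend(labels[test_idx])

    # Equal error rate: operating point where the false-positive rate
    # matches the false-negative rate (1 - true-positive rate).
    fpr, tpr, _ = roc_curve(truth, scores)
    eer_index = np.nanargmin(np.abs(fpr - (1.0 - tpr)))
    return fpr[eer_index]

Pooling decision scores across held-out subjects before computing the ROC curve is one common way to summarize leave-one-subject-out results with a single EER figure; per-subject averaging is an equally reasonable alternative.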

Files in This Item:
There are no files associated with this item.


