Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/40611
DC Field: Value
dc.title: PEM: A paraphrase evaluation metric exploiting parallel texts
dc.contributor.author: Liu, C.
dc.contributor.author: Dahlmeier, D.
dc.contributor.author: Ng, H.T.
dc.date.accessioned: 2013-07-04T08:08:18Z
dc.date.available: 2013-07-04T08:08:18Z
dc.date.issued: 2010
dc.identifier.citation: Liu, C., Dahlmeier, D., Ng, H.T. (2010). PEM: A paraphrase evaluation metric exploiting parallel texts. EMNLP 2010 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference: 923-932. ScholarBank@NUS Repository.
dc.identifier.isbn: 1932432868
dc.identifier.uri: http://scholarbank.nus.edu.sg/handle/10635/40611
dc.description.abstract: We present PEM, the first fully automatic metric to evaluate the quality of paraphrases, and consequently, that of paraphrase generation systems. Our metric is based on three criteria: adequacy, fluency, and lexical dissimilarity. The key component in our metric is a robust and shallow semantic similarity measure based on pivot language N-grams that allows us to approximate adequacy independently of lexical similarity. Human evaluation shows that PEM achieves high correlation with human judgments. © 2010 Association for Computational Linguistics.
dc.source: Scopus
dc.type: Conference Paper
dc.contributor.department: COMPUTER SCIENCE
dc.description.sourcetitle: EMNLP 2010 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
dc.description.page: 923-932
dc.identifier.isiut: NOT_IN_WOS
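
The abstract's key idea is to score adequacy through pivot-language N-grams rather than direct lexical overlap, so a paraphrase can be judged faithful even when it shares few words with the source. The Python sketch below is only an illustration of that idea under stated assumptions: the PIVOT_TABLE, the bigram-only pivot mapping, the Jaccard overlap, and the weights in pem_sketch are all hypothetical stand-ins, not the paper's actual model, and the fluency criterion (a language-model score in PEM) is omitted here.

    # Illustrative sketch only: toy stand-ins for two of PEM's three
    # criteria. The pivot phrase table, mapping, and weights below are
    # hypothetical; they are not the paper's actual components.

    # Hypothetical pivot phrase table mapping English phrases to
    # pivot-language (here Chinese) phrases, as might be extracted
    # from a parallel corpus.
    PIVOT_TABLE = {
        "the cat": {"猫"},
        "sat on": {"坐在"},
        "the mat": {"垫子"},
        "a cat": {"猫"},
        "rested on": {"坐在"},
        "the rug": {"垫子"},
    }

    def ngrams(tokens, n):
        # Set of n-grams (as space-joined strings) in a token list.
        return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def pivot_bag(sentence):
        # Map a sentence to the set of pivot-language phrases that its
        # bigrams translate to under the toy phrase table.
        bag = set()
        for g in ngrams(sentence.lower().split(), 2):
            bag |= PIVOT_TABLE.get(g, set())
        return bag

    def jaccard(a, b):
        # Set overlap in [0, 1]; empty sets count as no overlap.
        return len(a & b) / len(a | b) if a | b else 0.0

    def pem_sketch(source, paraphrase, w_adequacy=0.7, w_dissim=0.3):
        # Toy PEM-like score: adequacy from pivot-language phrase overlap,
        # lexical dissimilarity from surface bigram non-overlap.
        # (Fluency, a language-model score in the paper, is omitted.)
        adequacy = jaccard(pivot_bag(source), pivot_bag(paraphrase))
        surface = jaccard(ngrams(source.lower().split(), 2),
                          ngrams(paraphrase.lower().split(), 2))
        return w_adequacy * adequacy + w_dissim * (1.0 - surface)

    print(pem_sketch("the cat sat on the mat", "a cat rested on the rug"))

In this toy example the paraphrase shares almost no surface bigrams with the source, yet both map to the same pivot-language phrases, so the sketch scores it highly, mirroring the abstract's point that adequacy can be approximated independently of lexical similarity.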
Appears in Collections:Staff Publications

Files in This Item:
There are no files associated with this item.
