Title: PEM: A paraphrase evaluation metric exploiting parallel texts
Citation: Liu, C., Dahlmeier, D., & Ng, H. T. (2010). PEM: A paraphrase evaluation metric exploiting parallel texts. EMNLP 2010 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference: 923-932. ScholarBank@NUS Repository.
Abstract: We present PEM, the first fully automatic metric to evaluate the quality of paraphrases, and consequently, that of paraphrase generation systems. Our metric is based on three criteria: adequacy, fluency, and lexical dissimilarity. The key component in our metric is a robust and shallow semantic similarity measure based on pivot language N-grams that allows us to approximate adequacy independently of lexical similarity. Human evaluation shows that PEM achieves high correlation with human judgments. © 2010 Association for Computational Linguistics.
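The abstract's core idea, scoring adequacy by mapping both sentences into a pivot language so that lexically different paraphrases can still match, can be illustrated with a toy sketch. Everything here is an illustrative assumption: the hand-built lexicon, the function names, and the F1-style scoring are not the paper's actual model, which learns pivot-language N-grams from large parallel texts.

```python
from collections import Counter

# Toy pivot lexicon: English word -> French (pivot) translations.
# Illustrative assumption only; PEM derives pivot N-grams from parallel corpora.
PIVOT_LEXICON = {
    "car": ["voiture"],
    "automobile": ["voiture"],
    "fast": ["rapide"],
    "quick": ["rapide"],
    "red": ["rouge"],
    "blue": ["bleu"],
}

def pivot_bag(sentence: str) -> Counter:
    """Map each word to its pivot translations; return the bag of pivot tokens."""
    bag = Counter()
    for word in sentence.lower().split():
        for pivot in PIVOT_LEXICON.get(word, [word]):  # unknown words pass through
            bag[pivot] += 1
    return bag

def pivot_overlap(candidate: str, reference: str) -> float:
    """Rough adequacy proxy: F1 overlap of the two sentences' pivot bags."""
    c, r = pivot_bag(candidate), pivot_bag(reference)
    matched = sum((c & r).values())  # multiset intersection
    if matched == 0:
        return 0.0
    precision = matched / sum(c.values())
    recall = matched / sum(r.values())
    return 2 * precision * recall / (precision + recall)
```

In this sketch, `pivot_overlap("quick automobile", "fast car")` scores 1.0 even though the two English strings share no words, because both map to the same pivot tokens; this is the sense in which adequacy is approximated independently of lexical similarity.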
Source Title: EMNLP 2010 - Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
Appears in Collections: Staff Publications