Please use this identifier to cite or link to this item: https://doi.org/10.18653/v1/2022.findings-acl.315
Title: Interpreting the Robustness of Neural NLP Models to Textual Perturbations
Authors: Zhang, Yunxiang
Pan, Liangming
Tan, Samson
Kan, Min-Yen 
Keywords: cs.CL
Issue Date: 18-Mar-2022
Publisher: Association for Computational Linguistics
Citation: Zhang, Yunxiang, Pan, Liangming, Tan, Samson, Kan, Min-Yen (2022-03-18). Interpreting the Robustness of Neural NLP Models to Textual Perturbations. Findings of the Association for Computational Linguistics: ACL 2022. ScholarBank@NUS Repository. https://doi.org/10.18653/v1/2022.findings-acl.315
Abstract: Modern Natural Language Processing (NLP) models are known to be sensitive to input perturbations, and their performance can degrade when the models are applied to real-world, noisy data. However, it is still unclear why models are less robust to some perturbations than others. In this work, we test the hypothesis that the extent to which a model is affected by an unseen textual perturbation (robustness) can be explained by the learnability of the perturbation (defined as how well the model learns to identify the perturbation with a small amount of evidence). We further give a causal justification for the learnability metric. We conduct extensive experiments with four prominent NLP models -- TextRNN, BERT, RoBERTa and XLNet -- over eight types of textual perturbations on three datasets. We show that a model that is better at identifying a perturbation (higher learnability) becomes worse at ignoring such a perturbation at test time (lower robustness), providing empirical support for our hypothesis.
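
To make the learnability idea concrete, the toy sketch below trains a small binary classifier to distinguish clean sentences from perturbed ones using only a little labelled data, and reads its held-out accuracy as a learnability proxy. This is a hedged illustration, not the authors' implementation: the paper fine-tunes TextRNN, BERT, RoBERTa and XLNet over eight perturbation types on three datasets, while this sketch uses a hypothetical word-shuffle perturbation, a tiny synthetic corpus, and a bag-of-n-grams logistic regression as stand-ins.

import random

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def shuffle_words(text, rng):
    # Hypothetical perturbation for illustration: scramble word order.
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

rng = random.Random(0)

# Tiny synthetic corpus standing in for a real dataset.
clean = [
    "the movie was surprisingly good",
    "service at this restaurant is slow",
    "a gripping and well acted drama",
    "the plot makes no sense at all",
    "i would gladly watch it again",
    "the ending felt rushed and hollow",
] * 8
perturbed = [shuffle_words(t, rng) for t in clean]

texts = clean + perturbed
labels = [0] * len(clean) + [1] * len(perturbed)  # 1 = perturbed

# A small training split mimics learning the perturbation from
# "a small amount of evidence", as in the paper's definition.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, train_size=0.25, random_state=0, stratify=labels)

# Bigram features can detect scrambled word order; unigrams cannot.
vec = CountVectorizer(ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vec.fit_transform(X_train), y_train)

learnability = clf.score(vec.transform(X_test), y_test)
print(f"proxy learnability of word-shuffle: {learnability:.2f}")

Under the paper's hypothesis, a perturbation that such a probe identifies with high accuracy (high learnability) is precisely the kind of perturbation a task model will fail to ignore at test time (low robustness).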
Source Title: Findings of the Association for Computational Linguistics: ACL 2022
URI: https://scholarbank.nus.edu.sg/handle/10635/229364
DOI: 10.18653/v1/2022.findings-acl.315
Appears in Collections: Staff Publications; Elements

Files in This Item:
2022.findings-acl.315.pdf (1.1 MB, Adobe PDF; Access: Open; Version: Published)