Please use this identifier to cite or link to this item:
https://doi.org/10.1609/aaai.v36i10.21416
Title: Fusing Task-Oriented and Open-Domain Dialogues in Conversational Agents
Authors: Young, Tom; Xing, Frank; Pandelea, Vlad; Ni, Jinjie; Cambria, Erik
Keywords: Science & Technology; Technology; Computer Science, Artificial Intelligence; Computer Science
Issue Date: 1-Jan-2022
Publisher: ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE
Citation: Young, Tom; Xing, Frank; Pandelea, Vlad; Ni, Jinjie; Cambria, Erik (2022-01-01). Fusing Task-Oriented and Open-Domain Dialogues in Conversational Agents. THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE 36 (10): 11622-11629. ScholarBank@NUS Repository. https://doi.org/10.1609/aaai.v36i10.21416
Abstract: The goal of building intelligent dialogue systems has largely been pursued separately under two paradigms: task-oriented dialogue (TOD) systems, which perform task-specific functions, and open-domain dialogue (ODD) systems, which focus on non-goal-oriented chitchat. The two dialogue modes can potentially be intertwined seamlessly in the same conversation, as a friendly human assistant would do naturally. Such an ability is desirable in conversational agents, as the integration makes them more accessible and useful. Our paper addresses this problem of fusing TODs and ODDs in multi-turn dialogues. Based on the popular TOD dataset MultiWOZ, we build a new dataset, FusedChat, by rewriting the existing TOD turns and adding new ODD turns. This procedure constructs conversation sessions containing exchanges from both dialogue modes. It features inter-mode contextual dependency, i.e., the dialogue turns from the two modes depend on each other, with rich dependency patterns such as coreference and ellipsis. The new dataset, with 60k new human-written ODD turns and 5k rewritten TOD turns, offers a benchmark to test a dialogue model's ability to perform inter-mode conversations. This is a more challenging task, since the model has to determine the appropriate dialogue mode and generate the response based on the inter-mode context; however, such models would better mimic human-level conversation capabilities. We evaluate two baseline models on this task: classification-based two-stage models and two-in-one fused models. We publicly release FusedChat and the baselines to propel future work on inter-mode dialogue systems.
Source Title: THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE
URI: https://scholarbank.nus.edu.sg/handle/10635/241986
ISSN: 2159-5399, 2374-3468
DOI: 10.1609/aaai.v36i10.21416
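The classification-based two-stage baselines mentioned in the abstract can be pictured as a mode classifier followed by mode-specific response generation. The sketch below is only illustrative: `classify_mode`, `tod_response`, and `odd_response` are hypothetical stand-ins for trained components, not the released FusedChat baselines.

```python
# Minimal sketch of a classification-based two-stage inter-mode pipeline.
# All components here are hypothetical placeholders, not the paper's models.

from typing import List


def classify_mode(history: List[str]) -> str:
    """Decide whether the next response should be task-oriented (TOD) or chitchat (ODD).

    Hypothetical keyword heuristic; the paper's baseline would use a trained classifier
    conditioned on the full inter-mode context.
    """
    task_keywords = ("book", "reserve", "train", "hotel", "restaurant", "taxi")
    last_turn = history[-1].lower()
    return "TOD" if any(k in last_turn for k in task_keywords) else "ODD"


def tod_response(history: List[str]) -> str:
    """Placeholder for a task-oriented model (e.g. one trained on MultiWOZ-style turns)."""
    return "Sure, I can help with that booking. Which day would you like?"


def odd_response(history: List[str]) -> str:
    """Placeholder for an open-domain chitchat model."""
    return "That sounds lovely! How was the rest of your day?"


def respond(history: List[str]) -> str:
    """Two-stage pipeline: first pick the dialogue mode, then generate with that mode's model."""
    mode = classify_mode(history)
    return tod_response(history) if mode == "TOD" else odd_response(history)


if __name__ == "__main__":
    dialogue = [
        "I just got back from a lovely walk by the river.",
        "Nice! By the way, can you book a restaurant for tonight?",
    ]
    print(respond(dialogue))  # routed to the TOD placeholder
```

By contrast, the two-in-one fused baselines described in the abstract would replace this explicit routing with a single model that handles both modes.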
Appears in Collections: Staff Publications; Elements
Files in This Item:

File | Description | Size | Format | Access Settings | Version
---|---|---|---|---|---
21416-Article Text-25429-1-2-20220628.pdf | Published version | 400.74 kB | Adobe PDF | OPEN | Published