Authors: LIAO LIZI
Keywords: multimodal dialogue system; multimodal conversation; fashion agent; multimodal dialogue state tracking; artificial intelligence
Issue Date: 21-Jun-2019
Citation: LIAO LIZI (2019-06-21). DIALOG SYSTEMS GO MULTIMODAL. ScholarBank@NUS Repository.
Abstract: The next generation of user interfaces aims at intelligent systems that can adapt to common forms of human dialogue and hence provide more intuitive and natural ways of interacting. This ambitious goal, however, poses new challenges for design and implementation. First, since visual perception is a major means of perceiving the environment in addition to language (text or speech), it motivates the development of dialogue systems with multimodal understanding abilities. Second, to make such systems “smart”, knowledge should be incorporated as a foundation for achieving human-like abilities. Third, due to the interactive nature of dialogue, optimizing the policy of interaction is crucial. In this thesis, we conduct a thorough study of how task-oriented dialogue systems can go multimodal. Specifically, we propose a novel multimodal dialogue system framework and explore three major issues: multimodal understanding, knowledge incorporation, and policy optimization.
Appears in Collections: Ph.D Theses (Open)

Files in This Item: LiaoLL.pdf (15.61 MB, Adobe PDF)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.