DC Field: Value
dc.title: Knowledge-aware Multimodal Fashion Chatbot
dc.contributor.author: Lizi Liao
dc.contributor.author: You Zhou
dc.contributor.author: Yunshan Ma
dc.contributor.author: Richang Hong
dc.contributor.author: Tat-Seng Chua
dc.identifier.citation: Lizi Liao, You Zhou, Yunshan Ma, Richang Hong, Tat-Seng Chua (2018-10-26). Knowledge-aware Multimodal Fashion Chatbot. ACM Multimedia Conference 2018: 1265-1266. ScholarBank@NUS Repository.
dc.description.abstract: A multimodal fashion chatbot provides a natural and informative way to fulfill customers' fashion needs. However, making it 'smart' enough to generate substantive responses remains a challenging problem. In this paper, we present a fashion chatbot enriched with multimodal domain knowledge. It uses a taxonomy-based learning module to capture the fine-grained semantics in images, and leverages an end-to-end neural conversational model to generate responses based on the conversation history, visual semantics, and domain knowledge. To avoid inconsistent dialogues, a deep reinforcement learning method is used to further optimize the model. © 2018 Copyright held by the owner/author(s).
dc.publisher: Association for Computing Machinery, Inc
dc.type: Conference Paper
dc.contributor.department: DEPT OF COMPUTER SCIENCE
dc.description.sourcetitle: ACM Multimedia Conference 2018
dc.grant.fundingagency: Infocomm Media Development Authority
dc.grant.fundingagency: National Research Foundation
Appears in Collections: Staff Publications

Files in This Item:
File: Knowledge-aware Multimodal Fashion Chatbot.pdf | Size: 4.12 MB | Format: Adobe PDF
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.