Please use this identifier to cite or link to this item: https://scholarbank.nus.edu.sg/handle/10635/226242
DC Field	Value
dc.title	TOWARDS GENERATING DEEP QUESTIONS FROM TEXT
dc.contributor.author	PAN, LIANGMING
dc.date.accessioned	2022-05-31T18:00:46Z
dc.date.available	2022-05-31T18:00:46Z
dc.date.issued	2022-01-12
dc.identifier.citation	PAN, LIANGMING (2022-01-12). TOWARDS GENERATING DEEP QUESTIONS FROM TEXT. ScholarBank@NUS Repository.
dc.identifier.uri	https://scholarbank.nus.edu.sg/handle/10635/226242
dc.description.abstract	Question Generation (QG) is the task of automatically generating questions from various inputs such as raw text, databases, or semantic representations. People can ask deep questions about events, evaluations, opinions, syntheses, or reasons, usually in the form of Why, Why-not, How, and What-if; asking such questions requires an in-depth understanding of the input source and the ability to reason over disjoint but relevant contexts. Learning to ask such deep questions has broad applications in future intelligent systems, such as dialog systems, online education, and intelligent search. In this thesis, we conduct an in-depth study of Deep Question Generation (DQG): the task of generating deep questions that demand high cognitive skills, especially questions that require multi-hop reasoning over different texts. Specifically, we focus on two fundamental research questions: 1) how to effectively generate deep questions from text; and 2) how deep question generation benefits real-world NLP applications. To explore the first research question, we propose three methods that address different challenges of DQG: 1) facilitating document-level understanding in DQG by building semantic graph representations of the input text; 2) incorporating deep reasoning skills into DQG by explicitly modeling typical reasoning patterns of human question-asking; and 3) generating deep questions with certain desired properties via reinforcement learning. To study the second research question, we explore two real-world applications: multi-hop question answering and automated fact-checking. We create synthetic training data for these two tasks with the help of question generation, and show that DQG greatly benefits both tasks by boosting performance and reducing the need for human annotation. Finally, we summarize the strengths, weaknesses, and implications of our work, and discuss future research directions for generalizing our approach to a wider range of deep questions.
dc.language.iso	en
dc.subject	Question Generation, Question Answering, Multi-hop Reasoning, Language Generation, Fact Checking, Natural Language Processing
dc.type	Thesis
dc.contributor.department	INTEGRATIVE SCIENCES & ENGINEERING PROG
dc.contributor.supervisor	Min-Yen Kan
dc.contributor.supervisor	Tat Seng Chua
dc.description.degree	Ph.D
dc.description.degreeconferred	DOCTOR OF PHILOSOPHY (NUSGS)
dc.identifier.orcid	0000-0002-7789-7739
Appears in Collections:Ph.D Theses (Open)

Files in This Item:
File: PanLiangming_Thesis_FinalVersion_V4.pdf
Size: 5.71 MB
Format: Adobe PDF
Access Settings: OPEN
Version: None
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.