Transfer Reinforcement Learning for Task-oriented Dialogue Systems

The Hong Kong University of Science and Technology
Department of Computer Science and Engineering


PhD Thesis Defence


Title: "Transfer Reinforcement Learning for Task-oriented Dialogue Systems"

By

Mr. Kaixiang MO



Abstract

Dialogue systems have been attracting increasing attention in recent years. 
They can be categorized into open-domain dialogue systems and task-oriented 
dialogue systems. Task-oriented dialogue systems are designed to help users 
complete a specific task and typically consist of four modules: the spoken 
language understanding module, the dialogue state tracking module, the dialogue 
policy module, and the natural language generation module. One of the most 
important modules is the dialogue policy module, which chooses the best reply 
according to the dialogue context. In this thesis, we focus on the dialogue 
policy of task-oriented dialogue systems.
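
For intuition only, the sketch below shows one way these four modules could be 
wired together in code; the class and method names are assumptions made for 
this illustration, not components described in the thesis.

    # Illustrative sketch: a toy task-oriented dialogue pipeline with the four
    # modules named above. All names here are assumptions for illustration only.
    class DialogueSystem:
        def __init__(self, slu, state_tracker, policy, nlg):
            self.slu = slu                      # spoken language understanding
            self.state_tracker = state_tracker  # dialogue state tracking
            self.policy = policy                # dialogue policy (focus of the thesis)
            self.nlg = nlg                      # natural language generation

        def respond(self, user_utterance):
            # Parse the utterance into a semantic frame (speech-act and slots).
            frame = self.slu.parse(user_utterance)
            # Update the dialogue state with the new information.
            state = self.state_tracker.update(frame)
            # The dialogue policy chooses the best system action for this state.
            action = self.policy.select_action(state)
            # Realize the chosen action as a natural-language reply.
            return self.nlg.generate(action)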

Reinforcement learning is commonly used to learn the dialogue policy. However, 
traditional reinforcement learning algorithms rely heavily on a large amount of 
training data and an accurate reward signal. Transfer learning can leverage 
knowledge from a source domain to improve the performance of a model in the 
target domain with little target data. However, traditional transfer learning 
focuses on the supervised learning setting and cannot handle knowledge transfer 
in the reinforcement learning setting, since it does not consider the state. 
Transfer reinforcement learning (TRL) aims to transfer dialogue policy 
knowledge across different domains. By aligning the states and actions of the 
target domain with those of the source domain, the dialogue policy can be 
transferred from the source domain to the target domain.
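
As a rough illustration of this alignment idea (a hedged sketch under 
simplifying assumptions, not the algorithms proposed in the thesis), a 
target-domain policy can act through a source-domain Q-table once a state 
mapping and an action mapping are available; q_source, map_state and 
map_action below are hypothetical placeholders.

    # Illustrative sketch: reuse a tabular source-domain Q-function in the
    # target domain through hypothetical state/action alignment functions.
    def transfer_policy(q_source, map_state, map_action, target_actions):
        """Build a target-domain policy that acts through the source Q-table."""
        def policy(target_state):
            source_state = map_state(target_state)   # align target state to a source state
            scores = {action: q_source[source_state][map_action(action)]
                      for action in target_actions}  # score target actions via aligned source actions
            return max(scores, key=scores.get)       # greedy action under the transferred values
        return policy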

The key to transfer reinforcement learning is learning the mapping between the 
source and target domains, and transferring only domain-independent common 
knowledge while minimizing the negative transfer caused by domain-dependent 
knowledge. In this thesis, we propose a unified framework for transfer 
reinforcement learning problems in task-oriented dialogue systems, addressing 
three questions: 1) How can dialogue policies be transferred across users with 
different preferences in a personalized task-oriented dialogue system? 2) How 
can fine-grained common knowledge be transferred when it is mixed with 
domain-dependent knowledge? 3) How can dialogue policies be transferred across 
dialogue systems built with different sets of speech-acts and slots? We 
validate this research with both large-scale simulations and large-scale 
real-world datasets. The thesis also discusses the latest progress in the field 
and points out directions for future investigation.
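
One generic way to separate common from domain-dependent knowledge (a sketch 
under the assumption of a simple additive decomposition, not the specific 
models proposed in the thesis) is to split the policy's value estimate into a 
shared part that is transferred and a domain-specific part that is re-learned 
per domain or per user:

    # Illustrative sketch: an action-value network split into a shared
    # (domain-independent) component and a domain-specific component.
    import torch.nn as nn

    class DecomposedQNetwork(nn.Module):
        def __init__(self, state_dim, action_dim, hidden=64):
            super().__init__()
            # Shared component: intended to carry knowledge reusable across domains.
            self.common = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim))
            # Domain-specific component: absorbs what should not be transferred.
            self.specific = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, action_dim))

        def forward(self, state):
            # Only the shared part would be reused (or fine-tuned) in a new
            # domain; the specific part is re-initialized there.
            return self.common(state) + self.specific(state)

Reusing the shared component while re-initializing the domain-specific one when 
moving to a new domain or user is one common way to limit negative transfer.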


Date:			Friday, 23 February 2018

Time:			2:00pm - 4:00pm

Venue:			Room 3494
 			Lifts 25/26

Chairman:		Prof. Fugee Tsung (IEDA)

Committee Members:	Prof. Qiang Yang (Supervisor)
 			Prof. Lei Chen
 			Prof. Xiaojuan Ma
 			Prof. Pascale Fung (ECE)
 			Prof. Mei Ling Meng (Sys Engg & Engg Mgmt, CUHK)


**** ALL are Welcome ****