MPhil Thesis Defence


Title: "Summarization with Deep Transfer Learning"

By

Mr. Yuxiang WU


Abstract

Neural network-based approaches to summarization have achieved many 
recent successes. Although they achieve impressive results, especially on 
large datasets, they still have limitations. For extractive summarization, 
previous neural approaches focus on improving the saliency of the 
extracted summary but often fail to produce readable and coherent 
summaries. We therefore propose a coherence-reinforced extractive 
summarization model, which transfers the coherence patterns learned by a 
coherence model to the summary extractor through reinforcement learning. 
The extractive summarization model learns to optimize the coherence and 
saliency objectives simultaneously. Experimental results show that the 
proposed model outperforms previous work in both ROUGE scores and human 
evaluation. For abstractive summarization, models often require a 
considerable amount of training data. However, for small domains such as 
Femail, ScienceTech, and Health, training data is scarce, and abstractive 
summarization models often perform poorly on them. To improve performance 
on low-resource domains, we propose transfer learning methods for 
abstractive summarization. We explore both parameter-based and 
feature-based transfer learning methods, including pretraining, maximum 
mean discrepancy, and domain adversarial training. Experimental results 
show that the introduction of transfer learning significantly improves 
abstractive summarization performance on low-resource domains.
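
As a rough illustration of the coherence-reinforced objective above (a 
minimal sketch in Python, not the thesis implementation), the extractor 
can be trained with a reinforcement-learning reward that mixes a saliency 
term with a coherence term; saliency_score, coherence_score, and the 
mixing weight lam below are placeholder assumptions standing in for ROUGE 
and the learned coherence model.

    import random

    def saliency_score(extracted, reference):
        # Placeholder for a ROUGE-style overlap between the extracted and
        # reference summaries (lists of sentences).
        overlap = len(set(extracted) & set(reference))
        return overlap / max(len(reference), 1)

    def coherence_score(extracted):
        # Placeholder for a separately trained coherence model that scores
        # how well the extracted sentences read in sequence.
        return random.random()

    def combined_reward(extracted, reference, lam=0.7):
        # Reward used to reinforce the extractor: a weighted sum of saliency
        # and coherence, so both are optimized simultaneously.
        return (lam * saliency_score(extracted, reference)
                + (1 - lam) * coherence_score(extracted))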


Date:			Friday, 10 August 2018

Time:			1:00pm - 3:00pm

Venue:			Room 4475
 			Lifts 25/26

Committee Members:	Prof. Qiang Yang (Supervisor)
 			Dr. Kai Chen (Chairperson)
 			Dr. Xiaojuan Ma


**** ALL are Welcome ****