MPhil Thesis Defence


Title: "Dynamic Unit Surgery for Deep Neural Network Compression and 
Acceleration"

By

Mr. Minsam KIM


Abstract

Successful deep neural network models tend to possess millions of parameters. 
Reducing the size of such models by pruning parameters has recently attracted 
significant interest from the research community, as it yields more compact 
models with a similar level of performance. Pruning individual parameters 
usually results in large sparse weight tensors, which do not readily translate 
into proportional gains in computational efficiency; pruning filters or entire 
units, in contrast, produces a smaller dense architecture whose benefits 
off-the-shelf libraries can harness directly. It is well known in network 
pruning that the final retained performance can be improved by making the 
pruning process more gradual. Most existing techniques smooth the process by 
repeating it at increasing pruning ratios (multi-pass) or by applying it in a 
layer-wise fashion. In this thesis, we introduce Dynamic Unit Surgery (DUS), 
which smooths the process in a novel way by using decaying mask values instead 
of multi-pass or layer-wise treatment. Whereas multi-pass schemes entirely 
discard network components pruned at an early stage, DUS allows such 
components to be recovered. We empirically show that DUS achieves competitive 
performance against existing state-of-the-art pruning techniques on multiple 
image classification tasks using VGGNet, ResNet, and WideResNet+MixUp. We also 
explore the method's application to a transfer learning setting for 
fine-grained image classification and report its competitiveness against a 
state-of-the-art baseline.
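
The abstract describes the decaying-mask mechanism only at a high level. As a 
rough illustration, the PyTorch sketch below shows one way per-unit masks with 
decaying values could be realised; the per-filter L1 importance score, the 
decay factor, and all class and function names are assumptions made for this 
example, not the actual DUS algorithm presented in the thesis.

    # Illustrative sketch of unit pruning with decaying soft masks.
    # The importance criterion (per-filter L1 norm) and the geometric
    # decay/recovery schedule are assumptions, not the thesis's algorithm.
    import torch
    import torch.nn as nn

    class MaskedConv2d(nn.Module):
        """Conv layer whose output channels are scaled by a soft mask in [0, 1]."""
        def __init__(self, in_ch, out_ch, kernel_size, **kw):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
            # One mask value per output unit (filter); 1.0 means fully active.
            self.register_buffer("mask", torch.ones(out_ch))

        def forward(self, x):
            return self.conv(x) * self.mask.view(1, -1, 1, 1)

    def update_masks(layer, prune_ratio=0.5, decay=0.8):
        """Decay masks of low-importance units; let important units recover.

        Instead of hard zeroing (as in one-shot or multi-pass pruning), masks
        of currently unimportant units shrink gradually toward 0, while units
        that regain importance have their masks grow back toward 1.
        """
        with torch.no_grad():
            # Assumed importance score: L1 norm of each filter's weights.
            importance = layer.conv.weight.abs().sum(dim=(1, 2, 3))
            k = int(prune_ratio * importance.numel())
            threshold = importance.sort().values[k]
            low = importance < threshold
            layer.mask[low] *= decay                                  # decay
            layer.mask[~low] = (layer.mask[~low] / decay).clamp(max=1.0)  # recover

    # Usage: call update_masks(...) periodically (e.g. once per epoch) while
    # fine-tuning the network.
    layer = MaskedConv2d(3, 16, 3, padding=1)
    out = layer(torch.randn(1, 3, 32, 32))
    update_masks(layer)

In such a scheme a decayed unit is not permanently discarded: if its 
importance rises again, its mask can grow back toward 1, which is the recovery 
property the abstract contrasts with multi-pass pruning. Once a mask reaches 
(or is rounded to) 0, the corresponding filter can be physically removed, 
leaving a genuinely smaller dense architecture.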


Date:  			Monday, 17 August 2020

Time:			1:30pm - 3:30pm

Zoom meeting:		https://hkust.zoom.us/j/3542359066

Committee Members:	Prof. James Kwok (Supervisor)
			Prof. Nevin Zhang (Chairperson)
			Prof. Dit-Yan Yeung


**** ALL are Welcome ****