Deep Representation and Graph Learning for Disease Diagnosis on Medical Image Data

PhD Thesis Proposal Defence


Title: "Deep Representation and Graph Learning for Disease Diagnosis on 
Medical Image Data"

by

Mr. Yongxiang HUANG


Abstract:

The volume of digitized clinical data is increasing dramatically every 
year. Although well-established deep learning models such as 
convolutional neural networks have empowered a wide range of applications 
in the natural image domain, many challenges remain in developing 
reliable deep learning-based diagnosis models for medical image analysis. 
On the one hand, due to privacy protection and the cost of expert 
annotations, publicly available large-scale labeled datasets in the 
medical domain are highly limited. On the other hand, medical image data 
tends to be imperfect and is harder to interpret due to the inherent 
limitations in the imaging systems (e.g., staining noise in histology 
imaging). In this thesis, we propose three works to address these 
challenges in deep learning-based disease diagnosis on medical image 
data, covering pathology analysis and disease prediction, with the aim of 
contributing to better computer-assisted diagnosis systems.

Firstly, we investigate the histopathologic detection problem on 
high-resolution histology images, where directly applying a deep 
convolutional neural network on the whole image is computationally 
infeasible. Since local details (e.g., nuclei) contain discriminative 
features for identifying carcinoma, downsampling the high-resolution image 
is a sub-optimal choice. To address this challenge, we propose a deep 
spatial fusion approach that aggregates the local discriminative features 
and global information spatially to deliver a holistic representation, 
which boosts the cancer detection accuracy in our experiments.

Secondly, we study the problem of cancerous region localization using 
weakly supervised learning. Specifically, we propose an attention- and 
gradient-guided approach that learns to localize the evidence 
supporting the diagnostic decision of interest without requiring 
object-level labels, which eases the intensive labor of expert annotation 
on pathology images and makes the black-box diagnostic model more 
interpretable. Comprehensive experiments are conducted to demonstrate the 
effectiveness of our method.

Lastly, we address the challenge of population-based disease 
prediction on multi-modal data, including neuroimaging, genomic, and 
phenotypic modalities. We propose an edge-variational graph convolutional 
network that, on the one side, adaptively constructs a population graph by 
estimating the association between subjects, and, on the other side, 
performs semi-supervised disease prediction with uncertainty estimation 
using graph learning. Extensive experiments show that our approach 
combines imaging and non-imaging data in a complementary fashion to 
improve predictive performance for brain analysis and disease diagnosis 
on Autism Spectrum Disorder, Alzheimer’s Disease, and multiple ophthalmic 
diseases.

Finally, we conclude this thesis proposal with future research 
directions toward developing clinically deployable learning-based disease 
diagnosis models.


Date:			Thursday, 17 December 2020

Time:                  	2:00pm - 4:00pm

Zoom Meeting: 		https://hkust.zoom.us/j/2730502071

Committee Members:	Prof. Albert Chung (Supervisor)
  			Prof. Pedro Sander (Chairperson)
 			Dr. Qifeng Chen
 			Prof. Chiew-Lan Tai


**** ALL are Welcome ****