Evaluating and Understanding Adversarial Robustness in Deep Learning

Speaker:        Mr. Jinghui Chen
                UCLA

Title:          "Evaluating and Understanding Adversarial Robustness in
                 Deep Learning"

Date:           Tuesday, 2 February 2021

Time:           10am - 11am

Zoom Meeting:
https://hkust.zoom.us/j/465698645?pwd=c2E4VTE3b2lEYnBXcyt4VXJITXRIdz09

Meeting ID:     465 698 645
Passcode:       20202021


Abstract:

Deep Neural Networks (DNNs) have made many breakthroughs in different
areas of artificial intelligence. However, it is well known that DNNs are
vulnerable to adversarial examples. This raises serious concerns about the
robustness of DNNs and leads to an increasing need for robust DNN models.
In this talk, I will focus on evaluating and understanding adversarial
robustness in deep learning. First, I will
present a new hard-label attack-based model robustness evaluation method,
which is completely gradient-free. Our evaluation method is able to
identify "falsely robust" models that may deceive traditional white-box
and black-box attacks and give a false sense of robustness. I will
then discuss how network architecture (i.e., network width) affects
adversarial robustness and provide guidance on how to fully unleash the
power of wide model architectures.
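
As a rough illustration of the hard-label setting referenced above, the
sketch below assumes only a black-box predict_label() function that
returns the top-1 class (a hypothetical interface, not the evaluation
method presented in the talk); it shows a simple random-search probe that
uses no gradients or confidence scores.

    import numpy as np

    def hard_label_attack(predict_label, x, y_true, eps=0.03,
                          n_queries=1000, seed=0):
        """Random-search sketch of a hard-label (decision-based) attack:
        the only feedback it ever sees is the predicted class."""
        rng = np.random.default_rng(seed)
        for _ in range(n_queries):
            # Propose a random perturbation inside the L_inf ball of radius eps.
            delta = rng.uniform(-eps, eps, size=x.shape).astype(x.dtype)
            x_adv = np.clip(x + delta, 0.0, 1.0)
            # Query the model: hard label only, no logits, no gradients.
            if predict_label(x_adv) != y_true:
                return x_adv
        return None  # no adversarial example found within the query budget

A model that resists gradient-based attacks yet is broken by such
label-only probing would be a candidate for the "falsely robust" behavior
described above.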


*****************
Biography:

Jinghui Chen is a Ph.D. candidate in the Computer Science Department at
UCLA, advised by Professor Quanquan Gu. Before coming to UCLA, he obtained
his BEng degree in Electronic Engineering and Information Science from the
University of Science and Technology of China. He has also interned at IBM
T.J. Watson Research Center, JD AI Research, Twitter, and Microsoft. His
research interests are broadly in machine learning and its applications to
real-world problems, with a recent focus on adversarial machine learning.