Learning to recreate our visual world

Speaker:        Jun-Yan Zhu
                Berkeley AI Research (BAIR) Lab
                University of California, Berkeley

Title:          "Learning to recreate our visual world"

Date:           Monday, 16 October 2017

Time:           4:00pm - 5:00pm

Venue:          Lecture Theater F (near lift no. 25/26), HKUST

Abstract:

We are all consumers of visual content. Every day, people watch videos,
play digital games and share photos on social media. However, there is
still an asymmetry: not that many of us are creators. We aim to build
machines capable of creating and manipulating photographs, and to use
them as training wheels for visual content creation, with the goal of
making people more visually literate. We propose to learn natural image
statistics directly from large-scale data. We then define a class of
image generation and editing operations, constraining their output to
look realistic according to the learned image statistics.

I will discuss a few recent projects. First, we propose to directly model
the natural image manifold via generative adversarial networks (GANs) and
constrain the output of a photo editing tool to lie on this manifold.
Then, we present a general image-to-image translation framework,
"pix2pix", where a network is trained to map input images (such as user
sketches) directly to natural-looking results. Finally, we introduce
CycleGAN, which learns image-to-image translation models even in the
absence of paired training data, and we demonstrate its application to
style transfer, object transfiguration, season transfer, and photo
enhancement.
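
As a rough illustration of the unpaired setting behind CycleGAN, the
sketch below (my own illustration, not code from the talk) shows the
cycle-consistency term in PyTorch; the generators G and F and the toy
tensors are placeholders standing in for the real convolutional
networks and image batches.

    import torch
    import torch.nn as nn

    # Stand-in "generators"; the real models are convolutional networks.
    G = nn.Linear(64, 64)   # hypothetical generator mapping domain X -> Y
    F = nn.Linear(64, 64)   # hypothetical generator mapping domain Y -> X
    l1 = nn.L1Loss()

    x = torch.randn(8, 64)  # a batch of unpaired samples from domain X
    y = torch.randn(8, 64)  # a batch of unpaired samples from domain Y

    # With no paired supervision, the cycle-consistency term asks that
    # translating to the other domain and back recovers the input:
    # F(G(x)) should stay close to x, and G(F(y)) close to y.
    cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)

    # The full CycleGAN objective adds adversarial losses that push G(x)
    # and F(y) toward looking like real samples of their target domains.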


********************
Biography:

Jun-Yan Zhu is a Ph.D. student at the Berkeley AI Research (BAIR) Lab,
working on computer vision, graphics and machine learning with Professor
Alexei A. Efros. He received his B.E. from Tsinghua University in 2012 and
was a Ph.D. student at CMU from 2012 to 2013. His research goal is to build
machines capable of recreating the visual world. Jun-Yan is currently
supported by the Facebook Graduate Fellowship.