Selected Publications

Management tasks in datacenters are usually executed in-band with the data plane applications, making them susceptible to faults and failures in the data plane. In this paper, we introduce power line communication (PLC) to datacenters as an out-of-band management channel. We design PowerMan, a novel datacenter management network that can be readily built into existing datacenter power systems. With commercially available PLC devices, we implement a small 2-layer prototype with 12 servers. Using this real testbed, as well as large-scale simulations, we demonstrate the potential of PowerMan as a management network in terms of performance, reliability, and cost.

Existing wired optical interconnects face the challenge of supporting widespread communications in production clusters. Initial proposals are constrained to supporting hotspots between only a small number of racks (e.g., 2 or 4) at a time, reconfigurable at millisecond timescales. Recent efforts on reducing optical circuit reconfiguration time from milliseconds to microseconds partially mitigate this problem by rapidly time-sharing optical circuits across more nodes, but they are still limited by the total number of parallel circuits available simultaneously. In this paper, we seek an optical interconnect that can enable unconstrained communications within a computing cluster of thousands of servers. In particular, we present MegaSwitch, a multi-fiber ring optical fabric that exploits space division multiplexing across multiple fibers to deliver rearrangeably non-blocking communications to 30+ racks and 6000+ servers. We have implemented a 5-rack 40-server MegaSwitch prototype with real optical devices, and used testbed experiments as well as large-scale simulations to explore MegaSwitch’s architectural benefits and tradeoffs.

Cloud applications generate a mix of flows with and without deadlines. Scheduling such mix-flows is a key challenge; our experiments show that naively combining existing schemes for deadline and non-deadline flows is problematic. For example, prioritizing deadline flows hurts flow completion time (FCT) for non-deadline flows, while improving the deadline miss rate only marginally. We present Karuna, the first systematic solution for scheduling mix-flows.

All Publications

  • PowerMan: An Out-of-Band Management Network for Datacenters Using Power Line Communication



  • PIAS: Practical Information-Agnostic Flow Scheduling for Commodity Data Centers



  • Enabling Wide-spread Communications on Optical Fabric with MegaSwitch



  • Enabling ECN over Generic Packet Scheduling

    ACM CoNEXT’16


  • Online Flow Size Prediction for Improved Network Routing

    IEEE ICNP’16



  • RDMA over Converged Ethernet (RoCE) for Large-Scale Deep Learning with Amber

    With the rapid growth of model complexity and data volume, deep learning systems require more and more servers for parallel training. Today, deep learning systems with multiple servers and multiple GPUs are usually deployed within a single cluster, which typically employs an InfiniBand fabric to support Remote Direct Memory Access (RDMA), so as to achieve high throughput and low latency for inter-server transmission. With ever-larger models and datasets, deep learning systems are expected to scale across multiple network clusters, which necessitates a highly efficient inter-cluster networking stack with RDMA support. Since InfiniBand is best suited to clusters of at most a few thousand servers, we believe RDMA over Converged Ethernet (RoCE) is a more appropriate networking technology for multi-cluster, datacenter-scale deep learning. We therefore work on incorporating RoCE as the networking technology for deep learning systems such as TensorFlow and Tencent's Amber.

  • Angel: Network-Accelerated Large-Scale Machine Learning

    Angel is Tencent's in-house large-scale machine learning framework. In cooperation with the Technology Engineering Group (TEG), we developed a network accelerator for it. Via algorithm-specific flow scheduling, we achieved a 70x reduction in job completion time compared to vanilla Apache Spark.

  • Chukonu: Application-Aware Networking

    Datacenters exist because a standalone server or rack can no longer meet the requirements of modern applications: web search, ad recommendation, online commerce, machine learning, etc. Unlike traditional networks, datacenter networks enjoy high bandwidth, low latency, and minimal packet loss. These features, however, are not fully utilized today, because application developers are often unfamiliar with the datacenter environment and/or the networking stack and its tuning. We aim to design a system that lets application developers access networking functions in datacenters and unlock the network's full potential.

Professional & Teaching Experience


  • Software Engineer at Tencent (Jun 2015 - Jul 2017)
  • Technology Analyst at Royal Bank of Scotland (Jun - Aug 2010)

Certified Instructor at NVIDIA Deep Learning Institute (Oct 2017 - Now):

  • Deep Learning Demystified
  • Best Practices for Starting a Deep Learning Project
  • Applications of Deep Learning with Caffe, Theano and Torch
  • Image Classification with DIGITS
  • Object Detection with DIGITS
  • Image Segmentation with TensorFlow
  • Neural Network Deployment

Teaching Assistant at HKUST:

  • COMP 3511 Operating Systems
  • COMP 4621 Computer Communication Networks I
  • ELEC 2100 Signals and Systems
  • ELEC 2600 Probability and Random Processes in Engineering
  • ELEC 4120 Computer Communication Networks
  • ELEC 5350 Multimedia Networking



Honors & Awards

  • MSRA Ph.D. Fellowship (2011 - Now)
  • HKUST Postgraduate Studentship
  • HKUST Research Travel Grant
  • Meritorious Winner, Mathematical Contest in Modeling
  • The Commercial Radio 50th Anniversary Scholarships (2007 - 2011)
  • HKUST Scholarship for Continuing UG Students