WHAT LAB, short for the WeChat-HKUST Joint Lab on Artificial Intelligence Technology, is dedicated to fostering artificial intelligence and big data research that improves people's lives and advances the frontiers of knowledge, marking a milestone in the collaboration between WeChat and the higher education sector.

WeChat and HKUST will jointly conduct research on Artificial Intelligence (AI) technology and explore the far-reaching frontiers of AI. AI technology is experiencing tremendous growth today, and much of this advance depends on talent, problems and data. WeChat and HKUST complement each other in these aspects, and this collaboration on AI research is expected to be long-term and world-leading. Research areas of WHAT LAB include intelligent robotic systems, natural language processing, data mining, and speech recognition and understanding. The Lab will bring together top researchers to develop innovative artificial intelligence applications using WeChat's data.

Research areas


Machine Reading System

Developed by Yuxiang Wu, Prof. Qiang Yang’s Group

Demo Introduction: Machine Reading aims to develop machine learning algorithms that can read and comprehend natural language documents as humans do. With Machine Reading, natural language information is converted into a form that computers can process and further utilize in applications such as summarization, question answering and dialogue systems.
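To make the question-answering use case concrete, here is a minimal sketch of extractive machine reading: given a question and a document, return the sentence that best overlaps with the question. Real machine reading systems use learned neural readers; this toy scorer and the sample document are purely illustrative, not part of the actual demo.

```python
def answer(question: str, document: str) -> str:
    """Return the document sentence with the greatest word overlap
    with the question (a crude stand-in for a learned reader)."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    # Score each candidate sentence by word overlap with the question.
    return max(sentences, key=lambda s: len(q_words & set(s.lower().split())))

doc = ("WHAT LAB is a joint lab of WeChat and HKUST. "
       "It conducts research on natural language processing. "
       "The lab is located in Hong Kong.")
print(answer("Where is the lab located", doc))
# → The lab is located in Hong Kong
```

A production reader would predict an answer span with a trained model rather than rely on lexical overlap, but the pipeline shape (question + document in, text answer out) is the same.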

Moments Articles Real Time Propagation and Visualization System

Developed by Quan Li, Dongyu Liu, Haiyan Yang, Prof. Huamin Qu’s Group

Demo Introduction: In this project, we visually investigated how Official Account articles propagate on the WeChat platform from different perspectives, including a 3D global overview, a time-varying propagation view, and a community detection view. We also implemented several designs using real propagation data, including the propagation clock, propagation wave, propagation galaxy and propagation tree. The system, WeSeer, has been deployed at WeChat (Tencent) for daily propagation analysis.
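The propagation tree mentioned above can be illustrated with a minimal sketch: build a tree from reshare events and measure the longest share chain. The `(child_user, parent_user)` event format is a hypothetical simplification, not WeSeer's actual data schema.

```python
from collections import defaultdict

def build_tree(events):
    """Build a parent -> children map from (child, parent) reshare events."""
    children = defaultdict(list)
    for child, parent in events:
        children[parent].append(child)
    return children

def depth(children, root):
    """Length of the longest share chain starting at root."""
    if root not in children:
        return 1
    return 1 + max(depth(children, c) for c in children[root])

# A resharing B's post is recorded as ("A_follower", "A"), etc.
events = [("B", "A"), ("C", "A"), ("D", "B"), ("E", "D")]
tree = build_tree(events)
print(depth(tree, "A"))  # → 4  (chain A -> B -> D -> E)
```

Metrics like this chain depth, together with reach per time window, are the kind of quantities a propagation-analysis view would plot over time.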

Model-Based Global Localization for Aerial Robots Using Edge Alignment

Developed by Kejie Qiu, Prof. Shaojie Shen’s Group

Demo Introduction: The video contains three parts. The first part presents the localization accuracy and global consistency by comparison with ground truth in an indoor environment. Three trajectories, model-only, model+EKF and ground truth, are shown in different colors, and the 3D model used for localization is shown as a dense point cloud. The second part shows real-time localization results in an outdoor case.
Four trajectories, model-only, model+EKF, VINS and GPS, are represented in different colors. The 3D model used for localization is shown as a sparse point cloud for display efficiency. A special outdoor case with unstable GPS is also included. Three image views are shown simultaneously: the raw fisheye image, the electronically stabilized image, and the rendered image. Besides comparing the trajectories, localization performance can be judged intuitively by comparing the similarity between the stabilized image and the rendered image. The third part of the video shows closed-loop control in a trajectory tracking experiment that uses the proposed method for state feedback.
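The "model+EKF" fusion named above can be illustrated in miniature: a scalar Kalman update that fuses a predicted position with a model-based localization measurement. The real system runs a full multi-dimensional EKF over the robot's pose; the scalar form and all numeric values here are illustrative assumptions.

```python
def kalman_update(x_pred, p_pred, z, r):
    """Fuse prediction (x_pred, variance p_pred) with measurement
    (z, variance r) via the standard Kalman update."""
    k = p_pred / (p_pred + r)        # Kalman gain: trust vs. prediction
    x = x_pred + k * (z - x_pred)    # fused state estimate
    p = (1 - k) * p_pred             # reduced uncertainty after fusion
    return x, p

# Odometry prediction (e.g. from VINS) fused with a model-based global fix.
x, p = kalman_update(x_pred=10.0, p_pred=4.0, z=12.0, r=1.0)
print(round(x, 2), round(p, 2))  # → 11.6 0.8
```

The fused estimate lands closer to the lower-variance measurement, and the posterior variance drops below either input, which is why the model+EKF trajectory in the video tracks ground truth more smoothly than model-only localization.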