The Hong Kong University of Science and Technology
Department of Computer Science and Engineering


PhD Thesis Defence


Title: "Image-based Urban Modeling"

By

Mr. Tian FANG


Abstract

There is high demand for 3D modeling of urban environments. Buildings and 
trees dominate the urban landscape, so reconstructing 3D models of them 
is a central problem in urban modeling. The sheer number of buildings and 
trees calls for cheaper and more automatic approaches. Traditional 
scanner-based approaches require expensive equipment and capture only 
unstructured 3D points without the photometric appearance of the scene, 
while manual editing approaches demand a great deal of labor. Image-based 
modeling, which reconstructs a mathematical 3D representation of objects 
from images together with a registered color texture map, therefore 
offers an attractive solution.

Traditional image-based building modeling either relies on a general 
smoothness assumption on the reconstructed surface to automatically 
recover irregular surface meshes, or requires fully manual editing to 
establish correspondences among images and produce a regularized surface 
representation. In contrast, this thesis aims to create regularized 3D 
facade models with less user interaction, and we propose methods that 
improve the key stages of the existing workflow. First, a resampling 
scheme selects dominant correspondences that yield results for 
large-scale quasi-dense reconstruction comparable to using all 
correspondences. Second, to model a single facade, we introduce the 
concept of unwrappable facades, which generalizes the traditional notion 
of an elevation to an unwrappable surface. This representation lets us 
model a larger range of facades semi-automatically with a global shape 
description than previous methods do, makes adding detailed decorations 
easier, and enables image-based facade synthesis. Finally, we present a 
facade partition scheme that uses the natural vertical lines of buildings 
to automatically separate a large collection of urban images and 3D point 
clouds into facade-level units, which makes automatic image-based facade 
modeling possible.
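
To illustrate the facade-level partition idea, here is a minimal sketch, 
not the thesis implementation: it assumes vertical lines have already 
been detected and that both the lines and the 3D points have been 
projected onto a horizontal axis along the street; the function name, 
the min_gap parameter, and the use of NumPy are illustrative assumptions.

import numpy as np

def partition_by_vertical_lines(point_x, line_x, min_gap=1.0):
    # point_x : positions of 3D points projected onto the street axis
    # line_x  : positions of detected vertical lines (candidate separators)
    # min_gap : merge separators closer than this (in scene units)
    seps = []
    for x in sorted(line_x):
        if not seps or x - seps[-1] >= min_gap:
            seps.append(x)
    # assign each point to the interval between consecutive separators
    bins = np.digitize(point_x, seps)
    # return point indices grouped per candidate facade
    return [np.where(bins == b)[0] for b in np.unique(bins)]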

To overcome the drawbacks of existing image-based tree modeling 
techniques, such as the lack of complete multi-view data and tedious 
user interaction during preprocessing, we describe a system that models 
a tree from a single image. Given a near-orthogonal image of a tree, as 
few as two strokes, one marking a visible branch and the other marking 
the tree crown, are required to model a photo-realistic tree. The marked 
visible branches guide a branch tracing algorithm that extracts the 
remaining visible branches automatically. The extracted branches form a 
branch library, which is then grown with a non-parametric growing 
algorithm under the constraint of the extracted tree crown. We also 
demonstrate that additional information from multi-view images and laser 
scans, combined with joint segmentation and analysis, can further 
eliminate this user interaction.
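
As a rough illustration of the non-parametric growing step under the 
crown constraint, the following sketch (in 2D, and not the actual 
system; the function names, the branch library format, and the crown 
predicate are assumptions) repeatedly attaches a library branch to a tip 
and keeps it only if it stays inside the marked crown.

import random

def grow_tree(seed_tips, branch_library, inside_crown, iterations=500):
    # seed_tips      : tips (x, y) of the traced visible branches
    # branch_library : branches as lists of (dx, dy) offsets from their base
    # inside_crown   : predicate (x, y) -> bool derived from the crown stroke
    skeleton, tips = [], list(seed_tips)
    for _ in range(iterations):
        if not tips:
            break
        base = random.choice(tips)
        branch = random.choice(branch_library)
        # translate the library branch to the chosen attachment point
        new_pts = [(base[0] + dx, base[1] + dy) for dx, dy in branch]
        # keep the branch only if it lies entirely within the crown
        if all(inside_crown(x, y) for x, y in new_pts):
            skeleton.append((base, new_pts))
            tips.append(new_pts[-1])  # new tip available for further growth
    return skeleton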


Date:			Friday, 5 August 2011

Time:			2:00pm – 4:00pm

Venue:			Room 3584
 			Lifts 27/28

Chairman:		Prof. Yang Leng (MECH)

Committee Members:	Prof. Long Quan (Supervisor)
 			Prof. Huamin Qu
 			Prof. Chiew-Lan Tai
 			Prof. Kai Tang (MECH)
 			Prof. Shing-Chow Chan (Elec. & Elec. Engg., HKU)


**** ALL are Welcome ****