Image-based Urban Modeling

PhD Thesis Proposal Defence


Title: "Image-based Urban Modeling"

by

Mr. Tian FANG


ABSTRACT:

Given that ambitious digital earth projects, e.g. Google Earth and Microsoft 
Virtual Earth, are trying to convert the world we live in into 3D models, 
there is a high demand for 3D modeling of urban environments. In urban areas, 
buildings and trees dominate the landscape, so reconstructing 3D models of 
buildings and trees is an important problem for urban modeling. The sheer 
number of buildings and trees calls for cheaper and more automatic approaches. 
Traditional scanner-based approaches require expensive equipment and can only 
capture unstructured 3D points without the photometric appearance of the 
scenes, while manual editing approaches require a great deal of man-power. 
Image-based modeling, which reconstructs a mathematical 3D representation of 
objects from images together with registered color texture maps, therefore 
provides a tempting solution.

Traditional image-based modeling either relies on a general smoothness 
assumption on the reconstructed surface to automatically recover irregular 
surface meshes, or requires fully manual editing to build up correspondences 
among images in order to generate a regularized surface representation. In 
contrast, in this thesis we propose methods that make the process of creating 
regularized mesh models from images easier. Two challenges must be addressed 
to accomplish this task. The first is how to robustly reconstruct unstructured 
3D point clouds from a large number of urban images. The second is how to turn 
unstructured 3D point clouds into regularized mesh models with less effort.

To handle the first challenge, we describe a large-scale quasi-dense structure 
from motion system. Based on hierarchical structure from motion, a resampling 
scheme is proposed to select the dominant correspondences that yield a good 
reconstruction, while keeping the reconstruction quality as good as if all 
correspondences were involved. Therefore, even large-scale reconstruction can 
benefit from the robustness brought by the large number of propagated matches 
of the quasi-dense approach.
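
As a rough illustration of the resampling idea (not the exact scheme in the 
thesis), the Python sketch below keeps only the strongest correspondence in 
each image cell so that a reduced but well-distributed set of matches is 
passed on to reconstruction; the function name, the grid parameter and the 
notion of a per-match score are all illustrative assumptions.

    import numpy as np

    def resample_correspondences(points1, points2, scores, image_size, grid=16):
        """Keep one dominant correspondence per image grid cell.

        points1, points2 : (N, 2) arrays of matched pixel coordinates
        scores           : (N,) match quality, e.g. a correlation score
        image_size       : (width, height) of the first image
        grid             : number of cells along each axis (assumed value)
        """
        w, h = image_size
        cell = np.floor(points1 / np.array([w / grid, h / grid]))
        cell = np.clip(cell, 0, grid - 1).astype(int)
        cell_id = cell[:, 1] * grid + cell[:, 0]

        keep = {}
        for i, cid in enumerate(cell_id):
            # within each cell, retain only the best-scoring match
            if cid not in keep or scores[i] > scores[keep[cid]]:
                keep[cid] = i
        idx = np.array(sorted(keep.values()))
        return points1[idx], points2[idx]

The selected matches still cover the whole image, which is what lets the 
reduced set stand in for the full quasi-dense set during reconstruction.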

To tackle the second challenge, we introduce prior knowledge into the modeling 
of trees and buildings to automate and ease the modeling process. To model 
trees, we describe a system based on a single image. Given a near-orthogonal 
image of a tree, as few as two strokes, one marking a visible branch and the 
other marking the tree crown, are required to model a photo-realistic tree. 
The marked visible branches guide a branch tracing algorithm that extracts the 
remaining visible branches automatically. The extracted visible branches are 
then used to construct a branch library, which is later grown using a 
non-parametric growing algorithm under the constraint of the extracted tree 
crown.
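
The following minimal 2D sketch shows the flavor of such a growth loop, under 
strong simplifying assumptions: the actual system works on 3D branch subtrees 
and uses a data-driven, non-parametric selection rather than uniform random 
choice, and all names and parameters here are illustrative.

    import random

    def grow_tree(seed_branches, branch_library, inside_crown, iterations=200):
        """Grow a tree skeleton by repeatedly attaching library branches.

        seed_branches  : list of (start, end) 2D segments traced from the image
        branch_library : list of branch shapes, each a list of offsets from its root
        inside_crown   : predicate returning True if a 2D point lies in the crown
        iterations     : number of growth attempts (assumed parameter)
        """
        segments = list(seed_branches)
        tips = [end for _, end in seed_branches]

        for _ in range(iterations):
            tip = random.choice(tips)
            shape = random.choice(branch_library)
            # place the library branch at the chosen tip
            new_points = [(tip[0] + dx, tip[1] + dy) for dx, dy in shape]
            # only accept branches that stay inside the marked crown
            if all(inside_crown(p) for p in new_points):
                prev = tip
                for p in new_points:
                    segments.append((prev, p))
                    prev = p
                tips.append(new_points[-1])
        return segments

The crown stroke thus acts as a hard spatial constraint on growth, while the 
branch library supplies the local branching style observed in the image.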

To reconstruct buildings, we propose the concept of unwrappable facades, which 
generalizes the traditional concept of elevations to unwrappable surfaces. An 
unwrappable surface is a space surface defined by two orthogonal families of 
planar curves: a horizontal base shape and a vertical profile. We first 
propose a semi-automatic method to recover a single unwrappable facade that 
defines the principal structure of a building. This is carried out by 
recovering its principal direction, its base shape and its profile from the 
input data. We then propose an approximation approach that uses piecewise 
unwrappable surfaces to model more general buildings. We finalize the model 
with global texture optimization and analysis. In addition, interactive tools 
and image analysis techniques are proposed to introduce any desired geometric 
details on top of the principal structure of the building. The method has been 
validated on a variety of buildings.
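
To make the definition concrete, the sketch below samples such a surface by 
sweeping a horizontal base polyline along a vertical profile of offsets. This 
parametrization is only one plausible reading of the definition above, made 
for illustration, and all names are assumed rather than taken from the thesis.

    import numpy as np

    def unwrappable_surface(base, profile, heights):
        """Sample an unwrappable surface from a base shape and a vertical profile.

        base    : (M, 2) polyline in the horizontal plane (the base shape)
        profile : (K,) horizontal offset of the wall at each sampled height
        heights : (K,) z value of each profile sample
        Returns a (K, M, 3) grid of surface points.
        """
        base = np.asarray(base, dtype=float)
        # outward normal of each base vertex, estimated from neighbouring edges
        tangent = np.gradient(base, axis=0)
        normal = np.stack([tangent[:, 1], -tangent[:, 0]], axis=1)
        normal /= np.linalg.norm(normal, axis=1, keepdims=True)

        grid = np.empty((len(heights), len(base), 3))
        for k, (off, z) in enumerate(zip(profile, heights)):
            grid[k, :, :2] = base + off * normal   # offset the base shape horizontally
            grid[k, :, 2] = z                      # lift it to the profile height
        return grid

Under this reading, a constant profile reproduces a classical extruded 
elevation, while a varying profile bends the facade in the vertical direction.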

To deploy our image-based unwrappable facade modeling at a large scale, a 
crucial problem is how to automatically partition the input data, including 
images and 3D point clouds, into individual facades that are manageable for 
modeling. To solve this problem, under the assumption that most facades are 
rectilinear, we propose an automatic facade partition scheme that partitions 
along the natural vertical lines on the building. This scheme takes 
reconstructed 3D point clouds and 3D lines as input. The input data is first 
over-partitioned into sub-facades using the reconstructed vertical 3D lines. 
Then four features, the height of the sub-facades, the strip histogram, the 
number of intersections, and the edge response, are used to merge the 
sub-facades into meaningful facades. After the partition, each facade can be 
regularized locally, so that our earlier methods for modeling a single facade 
can be applied. Results of large-scale reconstruction are also demonstrated.
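
As a simplified sketch of the over-partition-then-merge step: the four features 
named above are treated here as one generic feature vector per sub-facade, and 
the greedy threshold merge is only an illustrative stand-in for the actual 
merging criterion; function names and the threshold are assumptions.

    import numpy as np

    def partition_facades(vertical_xs, features, merge_threshold=0.5):
        """Over-partition by vertical lines, then merge similar sub-facades.

        vertical_xs     : sorted positions of reconstructed vertical 3D lines
                          projected onto the facade plane (len(features) + 1 values)
        features        : one feature vector per strip between consecutive lines
        merge_threshold : maximum feature distance for merging (assumed value)
        """
        # initial sub-facades: one per interval between consecutive vertical lines
        segments = [[i] for i in range(len(features))]
        merged = True
        while merged:
            merged = False
            for i in range(len(segments) - 1):
                a = np.mean([features[j] for j in segments[i]], axis=0)
                b = np.mean([features[j] for j in segments[i + 1]], axis=0)
                # merge adjacent sub-facades whose features are close enough
                if np.linalg.norm(a - b) < merge_threshold:
                    segments[i] = segments[i] + segments[i + 1]
                    del segments[i + 1]
                    merged = True
                    break
        # report each facade as an (x_start, x_end) interval
        return [(vertical_xs[s[0]], vertical_xs[s[-1] + 1]) for s in segments]

Each returned interval corresponds to one facade that can then be regularized 
locally and modeled with the single-facade method described above.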


Date:                   Monday, 18 April 2011

Time:                   4:00pm - 6:00pm

Venue:                  Room 2612A
                        lifts 31/32

Committee Members:      Prof. Long Quan (Supervisor)
                        Prof. Chi-Keung Tang (Chairperson)
                        Dr. Pedro Sander
                        Dr. Chiew-Lan Tai


**** ALL are Welcome ****