Character Motion Representation based on Spatial Relationships

Speaker:	Dr. Taku KOMURA
		School of Informatics
		University of Edinburgh

Title:		"Character Motion Representation based on Spatial
		 Relationships"

Date:		Thursday, 22 July 2010

Time:		3:00pm - 4:00pm

Venue:		Room 3311 (via lifts 17/18), HKUST

Abstract:

Close interactions, not necessarily involving contact, between the body
parts of one or more characters, or between characters and the environment,
are common in computer animation and 3D computer games. Yoga, wrestling,
dancing and moving through a constrained environment are some examples.
Existing scene representations have a fundamental limitation in handling
such close interactions. Currently, a motion is typically described in
terms of joint angles and kinematic constraints such as contacts. With
this representation, automatically computing a valid motion requires
randomized exploration and significant computation for collision
detection. The animator also needs to shoulder the burden of specifying
all the kinematic constraints in advance. From the animator's perspective,
this is impractical and not conducive to manual editing. Competitive
automatic solutions require an effective representation that allows the
extraction of spatial relationships from existing motion data and
synthesis of new animations that preserve these relationships. Such a
representation will not only allow quantitative evaluation of the way
different body parts are interacting, but also facilitate qualitative
characterization of scene semantics.

Our research group has been exploring new representations that consider
spatial relationships when describing the interactions of multiple
characters, or of characters in a constrained environment. In this talk, I
introduce two of the proposed methods. The first representation, called
topology coordinates, describes how body parts are twisted around each
other. The second representation, called the interaction mesh, describes
which body parts are in close proximity to which others. Using these
representations, we can easily edit or retarget human motions while
preserving the context of the scene. These methodologies have a wide range
of applications in computer animation, pattern recognition and robotics. We
will first give a brief overview of each method, and then show some demos
of applying them to character motion synthesis.
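
To make the two representations a little more concrete, below is a minimal,
illustrative Python/NumPy sketch (not the exact formulation presented in the
talk) of the discrete Gauss Linking Integral on which topology-based
descriptions of winding typically build: it accumulates, over every pair of
segments of two polylines (e.g. two limbs sampled at the joints), the
standard Klenin-Langowski writhe contribution. The function names and the
toy "arm"/"leg" chains are purely hypothetical.

  import numpy as np

  def segment_gli(p1, p2, q1, q2):
      # Contribution of one pair of line segments (p1-p2, q1-q2) to the
      # Gauss Linking Integral, using the Klenin-Langowski discretisation.
      r13, r14 = q1 - p1, q2 - p1
      r23, r24 = q1 - p2, q2 - p2

      def unit_cross(a, b):
          c = np.cross(a, b)
          n = np.linalg.norm(c)
          return c / n if n > 1e-12 else np.zeros(3)

      n1 = unit_cross(r13, r14)
      n2 = unit_cross(r14, r24)
      n3 = unit_cross(r24, r23)
      n4 = unit_cross(r23, r13)

      def asin_clamped(x):
          # Clamp to [-1, 1] to guard against floating-point round-off.
          return np.arcsin(np.clip(x, -1.0, 1.0))

      omega = (asin_clamped(np.dot(n1, n2)) + asin_clamped(np.dot(n2, n3))
               + asin_clamped(np.dot(n3, n4)) + asin_clamped(np.dot(n4, n1)))
      sign = np.sign(np.dot(np.cross(q2 - q1, p2 - p1), r13))
      return sign * omega / (4.0 * np.pi)

  def writhe(chain_a, chain_b):
      # Sum the contributions over all segment pairs of two polylines,
      # e.g. two limbs sampled at their joint positions.
      total = 0.0
      for i in range(len(chain_a) - 1):
          for j in range(len(chain_b) - 1):
              total += segment_gli(chain_a[i], chain_a[i + 1],
                                   chain_b[j], chain_b[j + 1])
      return total

  # Toy example: a bent "arm" partially wrapped around a straight "leg".
  arm = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
  leg = np.array([[0.5, 0.5, -1.0], [0.5, 0.5, 1.0]])
  print(writhe(arm, leg))  # larger magnitude = more winding

In a similar spirit, an interaction mesh can be thought of as a volumetric
mesh (e.g. a Delaunay tetrahedralization) built over the joint positions of
the interacting characters, whose Laplacian coordinates are kept as close
as possible to their original values while the motion is edited or
retargeted.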

*******************
Biography:

Taku Komura is currently a Lecturer in the School of Informatics at the
University of Edinburgh. Before joining the University of Edinburgh in 2006,
he worked as an Assistant Professor at the City University of Hong Kong
(2002-2006) and as a postdoctoral researcher at RIKEN, Japan. He received
his PhD (2000), MSc (1997) and BSc (1995) in Information Science from the
University of Tokyo. His research interests include human motion analysis
and synthesis, physically-based animation and topology-based modelling.
His research area covers computer graphics, robotics and biomechanics.