PhD defence: Active and Physics-Based Human Pose Reconstruction

Public defence
Date: 2023-01-13, 10:15–13:00
Place: MH:Hörmander, Centre for Mathematical Sciences, Sölvegatan 18, LTH Faculty of Engineering, Lund, Sweden
Contact: erik.gartner@math.lth.se
Thesis title: Active and Physics-Based Human Pose Reconstruction
Author: Erik Gärtner, Centre for Mathematical Sciences, Lund University
Faculty opponent: Associate Professor Fahad Khan, Linköping University and MBZ University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
Examination committee:
- Docent Hossein Azizpour, KTH Royal Institute of Technology
- Professor Serge Belongie, University of Copenhagen, Denmark
- Assistant Professor Siyu Tang, ETH Zürich, Switzerland
- Deputy: Professor Anders Rantzer, Lund University
Session chair: Professor Görel Hedin, Lund University
For download: To be updated
Abstract
Perceiving humans is an important and complex problem within computer
vision. Its significance is derived from its numerous applications, such
as human-robot interaction, virtual reality, markerless motion capture,
and human tracking for autonomous driving. The difficulty lies in the
variability in human appearance, physique, and plausible body poses. In
real-world scenes, this is further exacerbated by difficult lighting
conditions, partial occlusions, and the depth ambiguity stemming from
the loss of information during the 3D-to-2D projection. Despite these
challenges, significant progress has been made in recent years,
primarily due to the expressive power of deep neural networks trained on
large datasets. However, creating large-scale datasets with 3D
annotations is expensive, and capturing the vast diversity of the real
world is demanding. Traditionally, 3D ground truth is captured using
motion capture laboratories that require large investments. Furthermore,
many laboratories cannot easily accommodate athletic and dynamic
motions. This thesis studies three approaches to improving visual
perception, with an emphasis on human pose estimation, that complement
improvements to the underlying predictor or training data.
The first two papers present active human pose estimation, where a
reinforcement learning agent is tasked with selecting informative
viewpoints to reconstruct subjects efficiently. The papers discard the
common assumption that the input is given and instead allow the agent to
move to observe subjects from desirable viewpoints, e.g., those which
avoid occlusions and for which the underlying pose estimator has a low
prediction error.
The third paper introduces the task of embodied visual active learning,
which goes further and assumes that the perceptual model is not
pre-trained. Instead, the agent is tasked with exploring its environment
and requesting annotations to refine its visual model. Learning to
explore novel scenarios and efficiently request annotation for new data
is a step towards lifelong learning, where models can evolve beyond
what they learned during the initial training phase. We study the
problem for segmentation, though the idea is applicable to other
perception tasks.
Lastly, the final two papers propose improving human pose estimation by
integrating physical constraints. These regularize the reconstructed
motions to be physically plausible and serve as a complement to current
kinematic approaches. Whether a motion has been observed in the training
data or not, the predictions should obey the laws of physics. Through
integration with a physical simulator, we demonstrate that we can reduce
reconstruction artifacts and enforce, e.g., contact constraints.