
Robotics and Semantic Systems

Computer Science | LTH | Lund University


Computer Vision

Pose Estimation from RGB Images of Highly Symmetric Objects using a Novel Multi-Pose Loss and Differential Rendering

Stefan Hein Bengtson, Hampus Åström, Thomas B. Moeslund, Elin A. Topp and Volker Krueger


We propose a novel multi-pose loss function to train a neural network for 6D pose estimation, using synthetic data and evaluating it on real images.

Our loss is inspired by the VSD (Visible Surface Discrepancy) metric and relies on a differentiable renderer and CAD models. The multi-pose approach produces multiple weighted pose estimates to avoid getting stuck in local minima, and it resolves pose ambiguities without using predefined symmetries. The network is trained only on synthetic data. We evaluate on real-world RGB images from the T-LESS dataset, which contains highly symmetric objects common in industrial settings. We show that our solution can replace the codebook in a state-of-the-art approach. So far, the codebook approach has had the shortest inference time in the field; our approach reduces inference time further while (a) avoiding discretization, (b) requiring a much smaller memory footprint and (c) improving pose recall.
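In spirit, the multi-pose loss weights several pose hypotheses so that training is not forced to commit to a single, possibly wrong, mode of a symmetric object. A minimal sketch of this weighting follows; the function names are hypothetical, and a plain scalar stands in for the VSD-style rendering discrepancy that the paper computes per hypothesis with a differentiable renderer:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over hypothesis scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def multi_pose_loss(discrepancies, logits):
    """Weighted sum of per-hypothesis pose discrepancies.

    discrepancies: array of K per-pose errors (stand-in for a
        VSD-style rendering discrepancy for each pose hypothesis).
    logits: unnormalized confidence scores, one per hypothesis,
        predicted by the network alongside the poses.
    """
    weights = softmax(logits)
    return float(np.dot(weights, discrepancies))
```

With uniform confidences the loss averages the hypotheses, e.g. `multi_pose_loss(np.array([1.0, 0.0]), np.array([0.0, 0.0]))` gives `0.5`; as training increases the confidence of a low-discrepancy hypothesis, the loss concentrates on it, which is what lets the network keep several symmetric poses in play instead of collapsing to one.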

IROS 2021

Continuous hand-eye calibration using 3D points (IEEE-INDIN2017)

Bjarne Großmann and Volker Krüger

The recent development of calibration algorithms has been driven in two major directions: (1) increasing the accuracy of the mathematical approaches and (2) increasing flexibility in usage by reducing the dependency on calibration objects. These two trends, however, seem contradictory: the overall accuracy is directly related to the accuracy of the pose estimation of the calibration object, which demands large objects, while increased flexibility leads to smaller objects or noisier estimation methods.

The method presented in this paper aims to resolve this problem in two steps. First, we derive a simple closed-form solution, with the focus shifted towards the equation of translation, that solves only for the necessary hand-eye transformation. We show that it is superior in accuracy and robustness to traditional approaches. Second, we reduce the dependency on the calibration object to a single 3D point by using a similar formulation based on the equation of translation, which is much less affected by the estimation error in the calibration object's orientation. Moreover, it makes estimating the orientation obsolete while retaining the higher accuracy and robustness of the first solution, resulting in a versatile method for continuous hand-eye calibration.
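The "equation of translation" idea can be illustrated with the classic AX = XB hand-eye formulation: writing out the translation rows of A X = X B gives (R_A - I) t_X = R_X t_B - t_A, a linear system in the unknown hand-eye translation t_X once the rotation R_X is known. The sketch below is an illustration of that standard relation, not the paper's exact derivation; the function name and data layout are assumptions:

```python
import numpy as np

def solve_hand_eye_translation(motions, R_X):
    """Least-squares estimate of the hand-eye translation t_X.

    motions: list of pairs ((R_A, t_A), (R_B, t_B)) from the
        AX = XB formulation (robot motion A, camera motion B).
    R_X: known 3x3 hand-eye rotation.

    The translation rows of A X = X B read
        (R_A - I) t_X = R_X t_B - t_A,
    so stacking them over all motions yields an overdetermined
    linear system, solved here by ordinary least squares.
    """
    I = np.eye(3)
    M = np.vstack([R_A - I for (R_A, _), _ in motions])
    b = np.concatenate([R_X @ t_B - t_A for (_, t_A), (_, t_B) in motions])
    t_X, *_ = np.linalg.lstsq(M, b, rcond=None)
    return t_X
```

Note that (R_A - I) is rank-deficient for a single motion, so at least two motions with rotations about different axes are needed for a unique t_X; stacking more motions averages out pose-estimation noise, which is what makes a continuous, single-point formulation attractive.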
