From IR Images to Point Clouds to Pose: Point Cloud-Based AR Glasses Pose Estimation

Ahmet Firintepe*, Carolin Vey, Stylianos Asteriadis, Alain Pagani, Didier Stricker

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


In this paper, we propose two novel AR glasses pose estimation algorithms that operate on single infrared images by using 3D point clouds as an intermediate representation. Our first approach, "PointsToRotation", is based on a Deep Neural Network alone, whereas our second approach, "PointsToPose", is a hybrid model combining Deep Learning with a voting-based mechanism. Our methods utilize a point cloud estimator, trained on multi-view infrared images in a semi-supervised manner, that generates a point cloud from a single image. Using this estimator, we generate a point cloud dataset from the HMDPose dataset, which consists of multi-view infrared images of various AR glasses with their corresponding 6-DoF poses. In comparison to another point cloud-based 6-DoF pose estimation method, CloudPose, we achieve an error reduction of around 50%. Compared to a state-of-the-art image-based method, we reduce the pose estimation error by around 96%.
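The core idea of regressing a rotation directly from an estimated point cloud can be sketched in a few lines. The snippet below is a minimal, illustrative toy (not the paper's actual architecture): a PointNet-style shared per-point MLP with random weights, max-pooled into a global feature, followed by a linear head that outputs a 6D rotation representation, which is mapped to a valid rotation matrix via Gram-Schmidt orthogonalization. All function names and shapes here are assumptions for illustration.

```python
import numpy as np

def six_d_to_rotation(v):
    """Map a 6D vector to a rotation matrix via Gram-Schmidt
    (a common continuous rotation representation for networks)."""
    a, b = v[:3], v[3:]
    x = a / np.linalg.norm(a)
    b = b - np.dot(x, b) * x
    y = b / np.linalg.norm(b)
    z = np.cross(x, y)
    return np.stack([x, y, z], axis=1)  # orthonormal columns, det = +1

def points_to_rotation(points, rng):
    """Toy PointNet-style regressor with random (untrained) weights:
    shared per-point MLP -> order-invariant max pooling -> linear head."""
    w1 = rng.standard_normal((3, 64))
    w2 = rng.standard_normal((64, 6))
    feat = np.maximum(points @ w1, 0.0)   # shared per-point MLP + ReLU
    global_feat = feat.max(axis=0)        # max pooling over all points
    return six_d_to_rotation(global_feat @ w2)

rng = np.random.default_rng(0)
cloud = rng.standard_normal((256, 3))     # stand-in estimated point cloud
R = points_to_rotation(cloud, rng)
print(np.allclose(R.T @ R, np.eye(3)))    # prints True: R is orthonormal
```

The 6D-plus-Gram-Schmidt output guarantees a valid rotation regardless of what the network emits, which is why this representation is popular for pose regression heads.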

Original language: English
Article number: 80
Number of pages: 18
Journal: Journal of Imaging
Issue number: 5
Publication status: Published - May 2021


  • computer vision
  • augmented reality
  • object pose estimation
  • point clouds
  • deep learning
