Spencer Carmichael¹
Rahul Agrawal¹*
Ram Vasudevan¹,²
Katherine A. Skinner¹
specarmi@umich.edu
rahulagr@umich.edu
ramv@umich.edu
kskin@umich.edu
¹Robotics Department, University of Michigan, Ann Arbor
²Department of Mechanical Engineering, University of Michigan, Ann Arbor
*Rahul Agrawal contributed to this work while employed at the University of Michigan.
Preprint (will be updated with appendix soon)
Code
NSAVP Dataset
Recognizing places from an opposing viewpoint during a return trip is a common experience for human drivers. However, the analogous robotics capability, visual place recognition (VPR) with limited field-of-view cameras under 180-degree rotations, has proven challenging to achieve. To address this problem, this paper presents Same Place Opposing Trajectory (SPOT), a technique for opposing-viewpoint VPR that relies exclusively on structure estimated through stereo visual odometry (VO). The method extends recent advances in lidar descriptors and uses a novel double (similar and opposing) distance matrix sequence matching method. We evaluate SPOT on a publicly available dataset with 6.7–7.6 km routes driven in similar and opposing directions under various lighting conditions. The proposed algorithm demonstrates substantial improvement over the state of the art, achieving up to 91.7% recall at 100% precision in opposing-viewpoint cases, while requiring less storage than all baselines tested and running faster than all but one. Moreover, the proposed method assumes no a priori knowledge of whether the viewpoint is similar or opposing, and also demonstrates competitive performance in similar-viewpoint cases.
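To illustrate the double distance matrix idea from the abstract, the sketch below shows one plausible way to combine sequence matching over a "similar" and an "opposing" descriptor distance matrix. This is a minimal illustration, not the paper's implementation: the matrix shapes, sequence length, and the forward/reverse diagonal scoring are assumptions. A same-direction revisit traces a forward diagonal in the query-versus-database distance matrix, while an opposing-direction revisit traces a reverse diagonal, so scoring both and taking the minimum lets the matcher remain agnostic to travel direction.

```python
import numpy as np

def sequence_score(D, seq_len):
    """Score each (query, database) cell of distance matrix D by averaging
    descriptor distances along short aligned sequences: a forward diagonal
    (same travel direction) and a reverse diagonal (opposing direction)."""
    nq, nd = D.shape
    fwd = np.full((nq, nd), np.inf)
    rev = np.full((nq, nd), np.inf)
    for i in range(seq_len - 1, nq):
        for j in range(nd):
            if j - seq_len + 1 >= 0:
                # Same direction: database index increases with query index.
                fwd[i, j] = np.mean([D[i - k, j - k] for k in range(seq_len)])
            if j + seq_len - 1 < nd:
                # Opposing direction: database index decreases as query advances.
                rev[i, j] = np.mean([D[i - k, j + k] for k in range(seq_len)])
    return fwd, rev

def best_match(D_sim, D_opp, seq_len=5):
    """Double distance matrix matching (illustrative): take forward-diagonal
    scores from the similar-viewpoint matrix and reverse-diagonal scores from
    the opposing-viewpoint matrix, then keep the lower-cost hypothesis."""
    fwd, _ = sequence_score(D_sim, seq_len)
    _, rev = sequence_score(D_opp, seq_len)
    combined = np.minimum(fwd, rev)
    i, j = np.unravel_index(np.argmin(combined), combined.shape)
    return i, j, combined[i, j]
```

Because the minimum is taken cell-wise over both hypotheses, the matcher needs no prior knowledge of whether the current traversal is in a similar or opposing direction, mirroring the property claimed in the abstract.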
@misc{carmichael2024spot,
  title={SPOT: Point Cloud Based Stereo Visual Place Recognition for Similar and Opposing Viewpoints},
  author={Spencer Carmichael and Rahul Agrawal and Ram Vasudevan and Katherine A. Skinner},
  year={2024},
  eprint={2404.12339},
  archivePrefix={arXiv},
  primaryClass={cs.RO}
}