RA-L 2023 Best Paper
Seth G. Isaacson*
Pou-Chun Kung*
Mani Ramanagopal
Ram Vasudevan
Katherine A. Skinner
sethgi@umich.edu
pckung@umich.edu
srmani@umich.edu
ramv@umich.edu
kskin@umich.edu
* Equal Contribution
All authors are affiliated with the Robotics Department of the University of Michigan, Ann Arbor.
Paper
GitHub
TL;DR: We propose LONER, the first real-time LiDAR SLAM algorithm that uses a neural-implicit scene representation. Existing implicit mapping methods for LiDAR show promising results in large-scale reconstruction, but either require ground-truth poses or run slower than real time. In contrast, LONER uses LiDAR data to train an MLP that estimates a dense map in real time, while simultaneously estimating the trajectory of the sensor. To achieve real-time performance, this paper proposes a novel information-theoretic loss function that accounts for the fact that different regions of the map may be learned to varying degrees throughout online training. The proposed method is evaluated qualitatively and quantitatively on two open-source datasets. This evaluation illustrates that the proposed loss function converges faster and leads to more accurate geometry reconstruction than other loss functions used in depth-supervised neural implicit frameworks. Finally, this paper shows that LONER estimates trajectories competitively with state-of-the-art LiDAR SLAM methods, while also producing dense maps competitive with existing real-time implicit mapping methods that use ground-truth poses.
The system comprises parallel threads for tracking and mapping. The tracking thread processes incoming scans and estimates odometry using ICP. LONER is designed to operate without an IMU, so ICP uses the identity transformation as its initial guess. In parallel and at a lower rate, the mapping thread selects the current scan and prior scans as keyframes, which are used to continue training the neural scene representation.
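To make the pipeline concrete, the following is a minimal sketch of this two-thread structure, not LONER's actual implementation; the helper functions icp_align, select_keyframes, and train_map_step are hypothetical placeholders.

# Conceptual sketch of the parallel tracking and mapping threads described above.
# All helper functions are hypothetical placeholders, not LONER's API.
import threading
import queue
import numpy as np

def icp_align(scan, reference, initial_guess):
    """Placeholder: align scan to reference with ICP and return the relative transform."""
    return initial_guess  # stub

def select_keyframes(keyframes, scan, pose):
    """Placeholder: keep the current scan plus a subset of prior scans."""
    return keyframes + [(scan, pose)]

def train_map_step(keyframes):
    """Placeholder: one optimization step of the neural scene MLP on the keyframes."""
    pass

scan_queue = queue.Queue()      # incoming LiDAR scans
keyframe_queue = queue.Queue()  # scans handed off to the mapping thread

def tracking_thread():
    pose, prev_scan = np.eye(4), None
    while True:
        scan = scan_queue.get()
        if prev_scan is not None:
            # No IMU is assumed, so ICP starts from the identity transformation.
            pose = pose @ icp_align(scan, prev_scan, initial_guess=np.eye(4))
        prev_scan = scan
        keyframe_queue.put((scan, pose.copy()))

def mapping_thread():
    keyframes = []
    while True:
        # Runs at a lower rate than tracking; trains the scene representation
        # on the current scan and selected prior keyframes.
        scan, pose = keyframe_queue.get()
        keyframes = select_keyframes(keyframes, scan, pose)
        train_map_step(keyframes)

threading.Thread(target=tracking_thread, daemon=True).start()
threading.Thread(target=mapping_thread, daemon=True).start()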
We introduce a novel loss function that leads to faster convergence and more accurate reconstruction than existing depth-supervised loss functions.
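For context, the sketch below shows a typical depth-supervised ray loss of the kind used by prior implicit mapping frameworks: expected depth is rendered along each ray with standard volume-rendering weights and penalized against the measured LiDAR range. This is only an illustration of the existing baseline class of losses; the proposed information-theoretic loss is defined in the paper.

# Simplified sketch of a conventional depth-supervised ray loss (not the proposed loss).
import torch

def expected_depth(sigma, t_vals):
    """Render expected depth along each ray from per-sample densities sigma
    at sample distances t_vals, using standard volume-rendering weights."""
    delta = t_vals[..., 1:] - t_vals[..., :-1]
    delta = torch.cat([delta, torch.full_like(delta[..., :1], 1e10)], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    weights = alpha * trans
    return (weights * t_vals).sum(dim=-1)

def depth_l2_loss(sigma, t_vals, lidar_depth):
    """Mean squared error between rendered depth and the LiDAR range."""
    return ((expected_depth(sigma, t_vals) - lidar_depth) ** 2).mean()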
Qualitative results of LONER on datasets recorded during the DARPA Subterranean Challenge demonstrate that LONER can operate in feature-sparse scenes. A higher-quality video with corresponding RGB images will be added soon.
To provide more detailed trajectory evaluation, the table reports not only APE RMSE but also APE Mean and APE Median.
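As a reference for how these statistics relate to per-pose errors, the snippet below computes APE RMSE, Mean, and Median from translation errors between an aligned estimated trajectory and ground truth. This is a generic sketch, not the evaluation code used to produce the table.

# Generic computation of APE statistics from aligned trajectories.
import numpy as np

def ape_stats(est_positions, gt_positions):
    """Absolute Pose Error statistics on translation, assuming the
    estimated trajectory is already aligned to the ground truth.
    Both inputs are (N, 3) arrays of positions."""
    errors = np.linalg.norm(est_positions - gt_positions, axis=1)
    return {
        "APE RMSE": float(np.sqrt(np.mean(errors ** 2))),
        "APE Mean": float(np.mean(errors)),
        "APE Median": float(np.median(errors)),
    }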
@ARTICLE{loner2023,
author={Isaacson, Seth and Kung, Pou-Chun and Ramanagopal, Mani and Vasudevan, Ram and Skinner, Katherine A.},
journal={IEEE Robotics and Automation Letters},
title={LONER: LiDAR Only Neural Representations for Real-Time SLAM},
year={2023},
volume={8},
number={12},
pages={8042-8049},
doi={10.1109/LRA.2023.3324521}}
LONER by Ford Center for Autonomous Vehicles at the University of Michigan is licensed under CC BY-NC-SA 4.0