LONER 🚶

LiDAR Only Neural Representations for Real-time SLAM

RA-L 2023

Seth G. Isaacson*

Pou-Chun Kung*

Mani Ramanagopal

Ram Vasudevan

Katherine A. Skinner

sethgi@umich.edu

pckung@umich.edu

srmani@umich.edu

ramv@umich.edu

kskin@umich.edu

* Equal Contribution

All authors are affiliated with the Robotics Department of the University of Michigan, Ann Arbor.

Paper

GitHub

Overview Video

Rendered depth in reconstructed map

Abstract

TL;DR: We propose LONER, the first real-time LiDAR SLAM algorithm that uses a neural-implicit scene representation. Existing implicit mapping methods for LiDAR show promising results in large-scale reconstruction, but either require ground-truth poses or run slower than real-time. In contrast, LONER uses LiDAR data to train an MLP to estimate a dense map in real-time, while simultaneously estimating the trajectory of the sensor. To achieve real-time performance, this paper proposes a novel information-theoretic loss function that accounts for the fact that different regions of the map may be learned to varying degrees throughout online training. The proposed method is evaluated qualitatively and quantitatively on two open-source datasets. This evaluation illustrates that the proposed loss function converges faster and leads to more accurate geometry reconstruction than other loss functions used in depth-supervised neural implicit frameworks. Finally, this paper shows that LONER estimates trajectories competitively with state-of-the-art LiDAR SLAM methods, while also producing dense maps competitive with existing real-time implicit mapping methods that use ground-truth poses.

System Overview

The system comprises two parallel threads: tracking and mapping. The tracking thread processes incoming scans and estimates odometry with ICP. Because LONER is designed to run without an IMU, ICP uses the identity transformation as its initial guess. In parallel, and at a lower rate, the mapping thread maintains the current scan and selected prior scans as keyframes, which it uses to update the training of the neural scene representation.
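To make the tracking step concrete, the sketch below implements a generic point-to-point ICP that starts from the identity transform, as the overview describes. This is a minimal NumPy/SciPy illustration, not the LONER implementation; the function names and parameters are our own.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    # Least-squares rigid transform (Kabsch/SVD) mapping src onto dst,
    # given one-to-one correspondences between the two point sets.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(scan, prev_scan, iters=20, tol=1e-6):
    # Point-to-point ICP. The initial guess is the identity transform,
    # since no IMU motion prior is assumed to be available.
    R_total, t_total = np.eye(3), np.zeros(3)
    src = scan.copy()
    tree = cKDTree(prev_scan)
    prev_err = np.inf
    for _ in range(iters):
        _, idx = tree.query(src)                       # nearest-neighbor matches
        R, t = best_fit_transform(src, prev_scan[idx])
        src = src @ R.T + t                            # apply the incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.linalg.norm(src - prev_scan[idx], axis=1).mean()
        if abs(prev_err - err) < tol:                  # converged
            break
        prev_err = err
    return R_total, t_total  # estimated relative pose (odometry)
```

Starting from the identity is reasonable at LiDAR scan rates, where inter-scan motion is small; a real system would also filter outlier correspondences.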

Novel JS Loss

We introduce a novel loss function that leads to faster convergence and more accurate reconstruction than existing depth-supervised loss functions.
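The exact loss formulation is given in the paper; as a hedged illustration of its information-theoretic building block, the snippet below computes the Jensen-Shannon divergence between two discrete distributions, e.g. a ray's rendered sample-weight distribution versus a distribution concentrated at the LiDAR depth measurement (that pairing is our assumption for illustration, not the paper's exact construction).

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    # Jensen-Shannon divergence between two discrete distributions.
    # Symmetric, non-negative, and bounded by log(2) in nats.
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Unlike the KL divergence, the JS divergence stays finite even when the two distributions have disjoint support, which makes it a well-behaved signal early in training when rendered depth distributions are still diffuse.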

More Demo Videos

Incremental Meshing Video

Input View Rendering

Novel View Rendering

Rendering in DARPA SubT Challenge Data Sequence

Qualitative results on datasets recorded during the DARPA Subterranean Challenge demonstrate that LONER can operate in feature-sparse scenes. A higher-quality video with the corresponding RGB images will be added soon.

Trajectory Evaluation

For a more detailed trajectory evaluation, we report not only APE RMSE but also APE Mean and APE Median.
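For reference, these three statistics are all derived from the same per-pose translation errors. A minimal sketch, assuming the estimated and ground-truth trajectories are time-synchronized and already aligned (e.g. via Umeyama alignment); the function name is ours:

```python
import numpy as np

def ape_stats(gt, est):
    # Absolute Pose Error (translation part) between two aligned
    # trajectories. gt, est: (N, 3) arrays of positions.
    err = np.linalg.norm(gt - est, axis=1)  # per-pose Euclidean error
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mean": float(np.mean(err)),
        "median": float(np.median(err)),
    }
```

RMSE penalizes occasional large errors more heavily, while the median is robust to them, so reporting all three gives a fuller picture of trajectory quality.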

Citation