We present a novel dataset for dynamic object detection in range data using spatiotemporal normals. The dataset provides raw, synchronised lidar and inertial data as well as point-wise dynamicity labels. We have added dynamicity information to the Newer College dataset (quad_easy, medium, and hard sequences) and collected a new dataset referred to as the Techlab dataset. The latter contains two labelled sequences recorded in our Techlab facility at the University of Technology Sydney with a 16-beam Ouster OS1 lidar and its embedded IMU. A Vicon motion capture system was used to record the pose of the lidar relative to an earth-fixed frame.

For both datasets, the ground-truth dynamicity of each point is generated with a similar two-step process: first, the environment is mapped without dynamic objects; second, each collected lidar point is registered in the map frame and its distance to the static map is queried. A point is classified as dynamic if this distance is above a certain threshold. Some frames are not labelled because no ground-truth pose is available, and some points are marked as unknown since not all areas observed by the lidar are present in the map.

For the Newer College dataset, we manually cleaned the provided map to remove dynamic objects and used the provided ground-truth trajectory for point registration. For the Techlab dataset, the map was created with the help of the Vicon system from lidar data collected in an empty lab. In the first Techlab sequence, three people and the sensor carrier walk around the room continuously, while in the second sequence, people momentarily stop before moving again. Note that with this ground-truth generation process, a person who stops is still labelled dynamic despite having zero velocity.
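The distance-threshold labelling step above can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline: the 0.2 m threshold and the brute-force nearest-neighbour search are assumptions for the example (a KD-tree would be used at real scales).

```python
import numpy as np

def label_dynamicity(points, static_map, threshold=0.2):
    """Label each registered lidar point as dynamic (True) or static (False)
    by its distance to the nearest point of the dynamic-free static map.
    The 0.2 m threshold is illustrative, not the value used for the dataset."""
    # Brute-force nearest-neighbour distance from each point to the map.
    diffs = points[:, None, :] - static_map[None, :, :]  # shape (N, M, 3)
    dists = np.linalg.norm(diffs, axis=2).min(axis=1)    # shape (N,)
    return dists > threshold

# Toy example: a flat 5x5 m "floor" as the static map.
static_map = np.array([[x, y, 0.0] for x in range(5) for y in range(5)], float)
scan = np.array([[2.0, 2.0, 0.05],   # close to the map -> static
                 [2.0, 2.0, 1.50]])  # far from the map -> dynamic
print(label_dynamicity(scan, static_map))  # [False  True]
```

Points whose registration falls outside the mapped area would additionally be flagged as unknown rather than static or dynamic.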
We provide raw labelled sequences as shown below (red is static, blue is dynamic, and pink is unknown).
We also provide registered bags using the Vicon system for localising the lidar sensor.
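Registering a scan with the Vicon pose amounts to a rigid transform of each lidar-frame point into the earth-fixed frame. A minimal sketch, where the rotation `R` and translation `t` are illustrative stand-ins for the Vicon-provided lidar pose:

```python
import numpy as np

def register_points(points, R, t):
    """Transform Nx3 lidar-frame points into the map frame: p_map = R @ p + t."""
    return points @ R.T + t

# Toy pose: a 90-degree yaw rotation and a 1 m translation along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
print(register_points(np.array([[1.0, 0.0, 0.0]]), R, t))  # [[1. 1. 0.]]
```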
You can download the dataset here. The data were collected using ROS1, and the bags have been converted for ROS2 using these instructions.
If you use our dataset in your research, please cite our paper:
@inproceedings{legentil2024undistortion,
author={Le Gentil, Cedric and Falque, Raphael and Vidal-Calleja, Teresa},
booktitle={2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
title={Real-Time Truly-Coupled Lidar-Inertial Motion Correction and Spatiotemporal Dynamic Object Detection},
year={2024},
}