First Author: Jie Yin
Figure 1. Sample Images
2022.12.13 Our new work is coming soon! Code and datasets will be available at https://github.com/sjtuyinjie/M2DGR-plus and https://github.com/SJTU-ViSYS/Ground-Fusion upon paper acceptance.
2022.9.13 Welcome to follow and star our new work, Ground-Challenge, at https://github.com/sjtuyinjie/Ground-Challenge. Feel free to open issues if needed.
2022.06.20 Thanks to Jialin Liu (Fudan University) for testing LVI-SAM on M2DGR. Their modified LVI-SAM version is available at link; refer to Link for detailed information. The configuration files for LVI-SAM on M2DGR are given in the launch file, camera file, and lidar file. Feel free to test the demo on your machine!
LVI-SAM on M2DGR
2022.02.18 We have uploaded a brand-new SLAM dataset with GNSS, vision, and IMU information. Here is our link: SJTU-GVI. Unlike M2DGR, the new data were captured on a real car, and GNSS raw measurements were recorded with a Ublox ZED-F9P device to facilitate GNSS-SLAM research. Give us a star and fork the project if you like it.
2022.02.01 Our work has been accepted by ICRA 2022!
We strongly recommend that newly proposed SLAM algorithms be tested on our dataset, because it has the following features:
- A rich pool of sensor data, including vision, LiDAR, IMU, GNSS, event-camera, and thermal-infrared measurements
- Various scenarios in real-world environments, including lifts, streets, rooms, and halls.
- Our dataset poses great challenges to existing SLAM algorithms, including LIO-SAM and ORB-SLAM3. If your proposed algorithm outperforms SOTA systems on M2DGR, your paper will be much more convincing and valuable.
We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All those sensors were well-calibrated and synchronized, and their data were recorded simultaneously. The ground truth trajectories were obtained by a motion-capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1TB) captured in diverse scenarios including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public.
Keywords: Dataset, Multi-modal, Multi-scenario, Ground Robot
- We collected long-term challenging sequences for ground robots both indoors and outdoors with a complete sensor suite, which includes six surround-view fish-eye cameras, a sky-pointing fish-eye camera, a perspective color camera, an event camera, an infrared camera, a 32-beam LiDAR, two GNSS receivers, and two IMUs. To our knowledge, this is the first SLAM dataset focusing on ground robot navigation with such rich sensory information.
- We recorded trajectories in challenging scenarios, such as lifts and complete darkness, which can easily cause existing localization solutions to fail. These situations are commonly faced in ground robot applications, yet seldom discussed in previous datasets.
- We launched a comprehensive benchmark for ground robot navigation. On this benchmark, we evaluated existing state-of-the-art SLAM algorithms of various designs and analyzed the characteristics and defects of each.
This work is licensed under the MIT License and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at [email protected] for further communication.
If you face any problem when using this dataset, feel free to open an issue. And if you find our dataset helpful in your research, simply give this project a star.
The paper has been accepted by both RA-L and ICRA 2022. A preprint version of the paper is available on arXiv and IEEE RA-L. If you use M2DGR in an academic work, please cite:
@ARTICLE{9664374,
author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
journal={IEEE Robotics and Automation Letters},
title={M2DGR: A Multi-sensor and Multi-scenario SLAM Dataset for Ground Robots},
year={2021},
volume={},
number={},
pages={1-1},
doi={10.1109/LRA.2021.3138527}}
Physical drawings and schematics of the ground robot are given below. The figures are in units of centimeters.
Figure 2. The GAEA ground robot equipped with a full sensor suite. The directions of the sensors are marked in different colors: red for X, green for Y, and blue for Z.
All the sensors and tracking devices, along with their most important parameters, are listed below:
- LiDAR: Velodyne VLP-32C, 360° horizontal FOV, -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°
- RGB camera: FLIR Pointgrey CM3-U3-13Y3C-CS, fish-eye lens, 1280×1024, 190° H-FOV, 190° V-FOV, 15 Hz
- GNSS: Ublox M8T, GPS/BeiDou, 1 Hz
- Infrared camera: PLUG 617, 640×512, 90.2° H-FOV, 70.6° V-FOV, 25 Hz
- V-I sensor: Realsense D435i, RGB/depth 640×480, 69° H-FOV, 42.5° V-FOV, 15 Hz; 6-axis IMU, 200 Hz
- Event camera: Inivation DVXplorer, 640×480, 15 Hz
- IMU: Handsfree A9, 9-axis, 150 Hz
- GNSS-IMU: Xsens MTi 680G; GNSS-RTK, localization precision 2 cm, 100 Hz; 9-axis IMU, 100 Hz
- Laser scanner: Leica MS60, localization accuracy 1 mm + 1.5 ppm
- Motion-capture system: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz
The rostopics of our rosbag sequences are listed as follows (a quick way to inspect them with standard ROS tools is shown after the list):
- LiDAR: /velodyne_points
- RGB cameras: /camera/left/image_raw/compressed, /camera/right/image_raw/compressed, /camera/third/image_raw/compressed, /camera/fourth/image_raw/compressed, /camera/fifth/image_raw/compressed, /camera/sixth/image_raw/compressed, /camera/head/image_raw/compressed
- GNSS (Ublox M8T): /ublox/aidalm, /ublox/aideph, /ublox/fix, /ublox/fix_velocity, /ublox/monhw, /ublox/navclock, /ublox/navpvt, /ublox/navsat, /ublox/navstatus, /ublox/rxmraw
- Infrared camera: /thermal_image_raw
- V-I sensor: /camera/color/image_raw/compressed, /camera/imu
- Event camera: /dvs/events, /dvs_rendering/compressed
- IMU: /handsfree/imu
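As a quick sanity check, these topics can be listed and played back with standard ROS tools. A minimal sketch, assuming a downloaded sequence named street_07.bag:

```bash
# List the topics, message counts, and duration of a sequence.
rosbag info street_07.bag

# Play the sequence back (with roscore running) ...
rosbag play street_07.bag

# ... and inspect a stream, e.g. the Handsfree A9 IMU, in another terminal.
rostopic echo /handsfree/imu
```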
All the sequences, together with their ground truth (GT), are now publicly available.
Figure 3. A sample video with fish-eye images (both forward-looking and sky-pointing), perspective image, thermal-infrared image, event image, and LiDAR odometry.
An overview of M2DGR is given in the table below:
Scenario | Street | Circle | Gate | Walk | Hall | Door | Lift | Room | Roomdark | TOTAL |
---|---|---|---|---|---|---|---|---|---|---|
Number | 10 | 2 | 3 | 1 | 5 | 2 | 4 | 3 | 6 | 36 |
Size/GB | 590.7 | 50.6 | 65.9 | 21.5 | 117.4 | 46.0 | 112.1 | 45.3 | 171.1 | 1220.6 |
Duration/s | 7958 | 478 | 782 | 291 | 1226 | 588 | 1224 | 275 | 866 | 13688 |
Dist/m | 7727.72 | 618.03 | 248.40 | 263.17 | 845.15 | 200.14 | 266.27 | 144.13 | 395.66 | 10708.67 |
Ground Truth | RTK/INS | RTK/INS | RTK/INS | RTK/INS | Leica | Leica | Leica | Mocap | Mocap | --- |
Figure 4. Outdoor sequences: all trajectories are mapped in different colors.
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
gate_01 | 2021-07-31 | 16.4 GB | 172s | dark, around gate | Rosbag | GT |
gate_02 | 2021-07-31 | 27.3 GB | 327s | dark, loop back | Rosbag | GT |
gate_03 | 2021-08-04 | 21.9 GB | 283s | day | Rosbag | GT |
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
Circle_01 | 2021-08-03 | 23.3 GB | 234s | circle | Rosbag | GT |
Circle_02 | 2021-08-07 | 27.3 GB | 244s | circle | Rosbag | GT |
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
street_01 | 2021-08-06 | 75.8 GB | 1028s | street and buildings, night, zigzag, long-term | Rosbag | GT |
street_02 | 2021-08-03 | 83.2 GB | 1227s | day, long-term | Rosbag | GT |
street_03 | 2021-08-06 | 21.3 GB | 354s | night, back and forth, full speed | Rosbag | GT |
street_04 | 2021-08-03 | 48.7 GB | 858s | night, around lawn, loop back | Rosbag | GT |
street_05 | 2021-08-04 | 27.4 GB | 469s | night, straight line | Rosbag | GT |
street_06 | 2021-08-04 | 35.0 GB | 494s | night, one turn | Rosbag | GT |
street_07 | 2021-08-06 | 77.2 GB | 929s | dawn, zigzag, sharp turns | Rosbag | GT |
street_08 | 2021-08-06 | 31.2 GB | 491s | night, loop back, zigzag | Rosbag | GT |
street_09 | 2021-08-07 | 83.2 GB | 907s | day, zigzag | Rosbag | GT |
street_010 | 2021-08-07 | 86.2 GB | 910s | day, zigzag | Rosbag | GT |
walk_01 | 2021-08-04 | 21.5 GB | 291s | day, back and forth | Rosbag | GT |
Figure 5. Lift sequences: the robot wandered around a hall on the first floor and then went to the second floor by lift. A laser scanner tracked the trajectory outside the lift.
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
lift_01 | 2021-08-04 | 18.4 GB | 225s | lift | Rosbag | GT |
lift_02 | 2021-08-04 | 43.6 GB | 488s | lift | Rosbag | GT |
lift_03 | 2021-08-15 | 22.3 GB | 252s | lift | Rosbag | GT |
lift_04 | 2021-08-15 | 27.8 GB | 299s | lift | Rosbag | GT |
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
hall_01 | 2021-08-01 | 29.1 GB | 351s | random walk | Rosbag | GT |
hall_02 | 2021-08-08 | 15.0 GB | 128s | random walk | Rosbag | GT |
hall_03 | 2021-08-08 | 20.5 GB | 164s | random walk | Rosbag | GT |
hall_04 | 2021-08-15 | 17.7 GB | 181s | random walk | Rosbag | GT |
hall_05 | 2021-08-15 | 35.1 GB | 402s | circle | Rosbag | GT |
Figure 6. Room sequences: captured under a motion-capture system with twelve cameras.
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
room_01 | 2021-07-30 | 14.0 GB | 72s | room, bright | Rosbag | GT |
room_02 | 2021-07-30 | 15.2 GB | 75s | room, bright | Rosbag | GT |
room_03 | 2021-07-30 | 26.1 GB | 128s | room, bright | Rosbag | GT |
room_dark_01 | 2021-07-30 | 20.2 GB | 111s | room, dark | Rosbag | GT |
room_dark_02 | 2021-07-30 | 30.3 GB | 165s | room, dark | Rosbag | GT |
room_dark_03 | 2021-07-30 | 22.7 GB | 116s | room, dark | Rosbag | GT |
room_dark_04 | 2021-08-15 | 29.3 GB | 143s | room, dark | Rosbag | GT |
room_dark_05 | 2021-08-15 | 33.0 GB | 159s | room, dark | Rosbag | GT |
room_dark_06 | 2021-08-15 | 35.6 GB | 172s | room, dark | Rosbag | GT |
Figure 7. Door sequences: a laser scanner tracked the robot through a door from indoors to outdoors.
Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
---|---|---|---|---|---|---|
door_01 | 2021-08-04 | 35.5 GB | 461s | outdoor to indoor to outdoor, long-term | Rosbag | GT |
door_02 | 2021-08-04 | 10.5 GB | 127s | outdoor to indoor, short-term | Rosbag | GT |
For convenience of evaluation, we provide configuration files for some well-known SLAM systems below:
LINS,
- For rosbag users, first build image_view:
roscd image_view
rosmake image_view
sudo apt-get install mjpegtools
Open one terminal and type roscore. Then open another terminal and republish the compressed stream as raw images:
rosrun image_transport republish compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw
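You can then play a sequence and check the decompressed stream. A minimal sketch, assuming a downloaded sequence named street_07.bag:

```bash
# Play the sequence (with roscore and the republish node above running).
rosbag play street_07.bag

# View the decompressed color stream with image_view.
rosrun image_view image_view image:=/camera/color/image_raw
```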
- For non-rosbag users, just take advantage of the following scripts, export_tum, export_euroc, and get_csv, to get data in TUM or EuRoC format.
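For reference, each line of a TUM-format trajectory file stores one timestamped pose as `timestamp tx ty tz qx qy qz qw` (timestamp in seconds, translation in meters, orientation as a unit quaternion). The values below are purely illustrative:

```
1628066820.50 1.2345 -0.8765 0.0123 0.0 0.0 0.7071 0.7071
```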
We use the open-source tool evo for evaluation. To install evo, type
pip install evo --upgrade --no-binary evo
To evaluate monocular visual SLAM (with scale alignment, since monocular methods cannot observe metric scale), type
evo_ape tum street_07.txt your_result.txt -vaps
To evaluate LiDAR SLAM, type
evo_ape tum street_07.txt your_result.txt -vap
To test GNSS-based methods (without alignment, since the estimates are already in a global frame), type
evo_ape tum street_07.txt your_result.txt -vp
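To compare several runs side by side, evo can also archive each result and tabulate the archives afterwards. A minimal sketch, where method_a.txt and method_b.txt are placeholder result files:

```bash
# Archive the APE statistics of each run.
evo_ape tum street_07.txt method_a.txt -vap --save_results results/method_a.zip
evo_ape tum street_07.txt method_b.txt -vap --save_results results/method_b.zip

# Tabulate and plot the archived results together.
evo_res results/*.zip -p --save_table results/table.csv
```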
For camera intrinsics, visit OCamCalib for the omnidirectional model, Vins-Fusion for the pinhole and MEI models, and OpenCV for the Kannala-Brandt model.
For IMU intrinsics, visit imu_utils.
For extrinsics between cameras and IMU, visit Kalibr. For extrinsics between LiDAR and IMU, visit Lidar_IMU_Calib. For extrinsics between cameras and LiDAR, visit Autoware.
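As an illustration of the camera-IMU step, a typical Kalibr call looks roughly like the sketch below; the bag and YAML file names are placeholders, and the exact file formats are documented in the Kalibr wiki:

```bash
# Calibrate camera-IMU extrinsics from a calibration bag
# recorded in front of an AprilGrid target.
kalibr_calibrate_imu_camera \
  --bag calib_sequence.bag \
  --cam camchain.yaml \
  --imu imu.yaml \
  --target aprilgrid.yaml
```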
For GNSS-based methods like RTKLIB, we usually need to get data in RINEX format. To make use of the GNSS raw measurements, we use the Link toolkit.
We wrote a ROS driver for UVC cameras to record our thermal-infrared images: UVC ROS driver.
In the future, we plan to update and extend our project from time to time, striving to build a comprehensive SLAM benchmark similar to the KITTI dataset for ground robots.
If you have any suggestions or questions, do not hesitate to open an issue. And if you find our dataset helpful in your research, a simple star is the best affirmation for us.
This work is supported by NSFC (62073214). The authors from SJTU hereby express their appreciation.