
MineInsight: A Multi-sensor Dataset for Humanitarian Demining Robotics in Off-Road Environments

---------

This work is under review and the repository is fully anonymous.

---------

▶ Watch the MineInsight intro video (click the image; opens in a new tab)

---------

Repository Index


[1] Motivation

Landmines remain a persistent threat in conflict-affected regions, posing risks to civilians and impeding post-war recovery. Traditional demining methods are often slow, hazardous, and costly, necessitating the development of robotic solutions for safer and more efficient landmine detection.

MineInsight is a publicly available multi-spectral dataset designed to support advancements in robotic demining and off-road navigation. It features a diverse collection of sensor data, including visible (RGB, monochrome), short-wave infrared (VIS-SWIR), long-wave infrared (LWIR), and LiDAR scans. The dataset includes dual-view sensor scans from both a UGV and its robotic arm, providing multiple viewpoints to mitigate occlusions and improve detection accuracy.

With over 38,000 RGB frames, 53,000 VIS-SWIR frames, and 108,000 LWIR frames recorded in both daylight and nighttime conditions, featuring 35 different targets distributed along 3 tracks, MineInsight serves as a benchmark for developing and evaluating detection algorithms. It also offers an estimation of object localization, supporting researchers in algorithm validation and performance benchmarking.

MineInsight follows best practices from established robotic datasets and provides a valuable resource for the community to advance research in landmine detection, off-road navigation, and sensor fusion.


dataset_presentation_pic

[2] Experimental Setup

This section follows the terminology and conventions outlined in the accompanying paper.
For a more detailed understanding of the methodology and experimental design, please refer to the paper.

Sensors Overview

Experimental Setup

| Platform and Robotic Arm | Platform Sensor Suite | Robotic Arm Sensor Suite |
| --- | --- | --- |
| Clearpath Husky A200 UGV | Livox Mid-360 LiDAR | Teledyne FLIR Boson 640 |
| Universal Robots UR5e Robotic Arm | Sevensense Core Research Module | Alvium 1800 U-130 VSWIR |
|  | Microstrain 3DM-GV7-AR IMU | Alvium 1800 U-240 |
|  |  | Livox AVIA |

Sensors Coordinate Systems

The coordinate systems (and their TF names) of all sensors on our platform are illustrated in the figure below.

Note: The positions of the axis systems in the figure are approximate.
This visualization provides insight into the relative orientations between sensors,
whether in the robotic arm sensor suite or the platform sensor suite.

For the full transformation chain, refer to the following ROS 2 topics in the dataset:

  • /tf_static → Contains static transformations between sensors.
  • /tf → Contains dynamic transformations recorded during operation.
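The full transformation chain can be reconstructed by composing the individual transforms found on these topics. Below is a minimal pure-Python sketch of composing two translation-plus-quaternion transforms; the frame names and numeric values are illustrative, not the dataset's actual calibration:

```python
import math

def quat_mul(q1, q2):
    # Hamilton product; quaternions stored as (x, y, z, w), ROS convention.
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return (
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
    )

def quat_rotate(q, v):
    # Rotate vector v by quaternion q: q * (v, 0) * q^-1
    qv = (v[0], v[1], v[2], 0.0)
    qc = (-q[0], -q[1], -q[2], q[3])
    x, y, z, _ = quat_mul(quat_mul(q, qv), qc)
    return (x, y, z)

def compose(tf_a, tf_b):
    # Each tf is (translation, quaternion); returns the chained transform.
    (ta, qa), (tb, qb) = tf_a, tf_b
    rb = quat_rotate(qa, tb)
    t = (ta[0] + rb[0], ta[1] + rb[1], ta[2] + rb[2])
    return (t, quat_mul(qa, qb))

# Hypothetical example: base_link -> lidar, then lidar -> camera.
base_to_lidar = ((0.2, 0.0, 0.5), (0.0, 0.0, 0.0, 1.0))
half = math.sqrt(0.5)
lidar_to_cam = ((0.05, 0.0, 0.0), (0.0, 0.0, half, half))  # 90 deg about z
base_to_cam = compose(base_to_lidar, lidar_to_cam)
```

In practice, `tf2_ros.Buffer.lookup_transform` does this chaining for you once the bag's `/tf` and `/tf_static` messages are fed into a buffer; the sketch only illustrates the underlying arithmetic.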

tf_sens

[3] Environments and Sequences

The dataset was collected across 3 distinct tracks, each designed to represent a demining scenario with varying terrain and environmental conditions. These tracks contain a diverse set of targets, positioned to challenge detection algorithm development. The figure shows a top-view point-cloud distribution of the targets along each track.

dataset_tracks_presentation

For the sake of reproducibility, and to leave ground-truth auto-labelling and its refinement as an open challenge, we also release the raw data from the 3 reference sequences (the ones containing the AprilTags).

Please note that these ROS 2 bags have not been processed or altered. They are provided exactly as recorded, with none of the topic remapping that was applied to the dataset bags.

You can download the bags from here:

In addition, we also provide the ground position of each AprilTag stick, expressed in the map reference frame, as described in the paper.
These are released as JSON files, allowing users to evaluate the distances between the markers.

You can find them here: reference_sequences/

[4] Targets

For each track, a detailed inventory PDF is available, providing the full list of targets along with their respective details.
You can find them in the tracks_inventory folder of this repository:

📄 Track 1 Inventory | 📄 Track 2 Inventory | 📄 Track 3 Inventory

Each PDF catalogs every target with:

  • ID: Unique identifier for each target;
  • Name: Official name of the target;
  • Image: A visual reference of the object for recognition;
  • CAT-UXO link: Detailed explanation of the target (available only for landmines).

[5] Calibration

The dataset includes intrinsic and extrinsic calibration files for all cameras and LiDARs.

Intrinsic Calibration

intrinsics_calibration/

  • lwir_camera_intrinsics.yaml → LWIR camera
  • rgb_camera_intrinsics.yaml → RGB camera
  • sevensense_cameras_intrinsics.yaml → Sevensense grayscale cameras
  • swir_camera_intrinsics.yaml → VIS-SWIR camera

Extrinsic Calibration

extrinsics_calibration/

  • lwir_avia_extrinsics.yaml → LWIR ↔ Livox AVIA
  • rgb_avia_extrinsics.yaml → RGB ↔ Livox AVIA
  • sevensense_mid360_extrinsics.yaml → Sevensense ↔ Livox Mid-360
  • swir_avia_extrinsics.yaml → VIS-SWIR ↔ Livox AVIA

Note:
Intrinsic parameters are also included in the extrinsic calibration files, since the extrinsics were estimated on raw (unrectified) camera images.
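As a rough illustration of how such intrinsics are used, the sketch below projects a 3D point through a pinhole model with plumb-bob (radial-tangential) distortion, the model commonly stored in ROS-style camera calibration YAMLs. All numeric values here are placeholders; read the real parameters from the files above:

```python
def project_point(p, fx, fy, cx, cy, d):
    # Pinhole projection with plumb-bob distortion (k1, k2, p1, p2, k3).
    # p is a 3D point (X, Y, Z) in the camera frame, Z > 0.
    k1, k2, p1, p2, k3 = d
    x, y = p[0] / p[2], p[1] / p[2]          # normalized image coordinates
    r2 = x*x + y*y
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xd = x*radial + 2*p1*x*y + p2*(r2 + 2*x*x)
    yd = y*radial + p1*(r2 + 2*y*y) + 2*p2*x*y
    return (fx*xd + cx, fy*yd + cy)          # pixel coordinates

# Illustrative values only -- not the dataset's actual calibration.
u, v = project_point((0.1, -0.05, 2.0), 900.0, 900.0, 640.0, 512.0,
                     (-0.1, 0.01, 0.0, 0.0, 0.0))
```

For real use, `cv2.projectPoints` or `image_geometry.PinholeCameraModel` apply the same model directly from the calibration files.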

[6] Data

We release 2 sequences per track, resulting in a total of 6 sequences.
The data is available in three different formats:

  • 🗄 ROS 2 Bags
  • 🗄 ROS 2 Bags with Livox Custom Msg
  • 🖼 Raw Images

ROS 2 Bags Structure

Each ROS 2 bag includes:

Click here to view all the topics with a detailed explanation
| Topic | Message Type | Description |
| --- | --- | --- |
| /allied_swir/image_raw/compressed | sensor_msgs/msg/CompressedImage | SWIR camera raw image |
| /allied_swir/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | SWIR camera rectified image |
| /alphasense/cam_0/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 0 raw image |
| /alphasense/cam_0/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 0 rectified image |
| /alphasense/cam_1/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 1 raw image |
| /alphasense/cam_1/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 1 rectified image |
| /alphasense/cam_2/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 2 raw image |
| /alphasense/cam_2/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 2 rectified image |
| /alphasense/cam_3/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 3 raw image |
| /alphasense/cam_3/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 3 rectified image |
| /alphasense/cam_4/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 4 raw image |
| /alphasense/cam_4/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core greyscale camera 4 rectified image |
| /alphasense/imu | sensor_msgs/msg/Imu | IMU data from Sevensense Core |
| /avia/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox AVIA LiDAR |
| /avia/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox AVIA LiDAR |
| /flir/thermal/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image |
| /flir/thermal/rectified/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image |
| /flir/thermal/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image with colorized overlay |
| /flir/thermal/rectified/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image with colorized overlay |
| /microstrain/imu | sensor_msgs/msg/Imu | IMU data from Microstrain (internal) |
| /mid360/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox Mid-360 LiDAR |
| /mid360/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox Mid-360 LiDAR |
| /odometry/filtered | nav_msgs/msg/Odometry | Filtered odometry data (ROS 2 localization fusion output) |
| /odometry/wheel | nav_msgs/msg/Odometry | Wheel odometry data from the UGV wheel encoders |
| /tf | tf2_msgs/msg/TFMessage | Real-time transformations between coordinate frames |
| /tf_static | tf2_msgs/msg/TFMessage | Static transformations |

If you are downloading a ROS 2 Bag with Livox Custom Msg, you will find the following additional topics:

| Topic | Message Type | Description |
| --- | --- | --- |
| /avia/livox/lidar | livox_interfaces/msg/CustomMsg | Raw point cloud data from Livox AVIA LiDAR in custom Livox format |
| /mid360/livox/lidar | livox_ros_driver2/msg/CustomMsg | Raw point cloud data from Livox Mid-360 LiDAR in custom Livox format |

Note: These messages include timestamps for each point in the point cloud scan.
To correctly decode and use these messages, install the official Livox drivers:

For installation instructions, refer to the documentation in the respective repositories.
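As a sketch of how the per-point timestamps can be used: in the Livox custom messages, each scan carries a `timebase` and each point an `offset_time` (both in nanoseconds in `livox_ros_driver2`'s `CustomMsg`/`CustomPoint` definitions; verify against the driver version you install):

```python
def point_stamps(timebase_ns, offsets_ns):
    # Absolute timestamp (seconds) for each point in a Livox custom scan.
    # Assumes timebase and per-point offset_time are both in nanoseconds.
    return [(timebase_ns + off) * 1e-9 for off in offsets_ns]

# Hypothetical scan: timebase late Oct 2024, three points 10 us apart.
stamps = point_stamps(1_730_000_000_000_000_000, [0, 10_000, 20_000])
```

Per-point times like these are what LiDAR motion-compensation (de-skewing) pipelines consume, which is why the custom-format bags are provided alongside the standard PointCloud2 ones.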

ROS 2 Bags Downloads

You can download the datasets from the links below:

Track 1

🔹 Sequence 1:

🔹 Sequence 2:

Track 2

🔹 Sequence 1:

🔹 Sequence 2:

Track 3

🔹 Sequence 1:

🔹 Sequence 2:

Raw Images

Each archive contains images + 2D bounding box annotations (YOLOv8). After unzipping you'll get:

| Track / Seq | RGB | VIS-SWIR | LWIR |
| --- | --- | --- | --- |
| Track 1 - Seq 1 | track_1_s1_rgb [1.5 GB] | track_1_s1_swir [465.4 MB] | track_1_s1_lwir [649.7 MB] |
| Track 1 - Seq 2 | track_1_s2_rgb [5 GB] | track_1_s2_swir [1.5 GB] | track_1_s2_lwir [2.9 GB] |
| Track 2 - Seq 1 | track_2_s1_rgb [1.1 GB] | track_2_s1_swir [332.2 MB] | track_2_s1_lwir [507.8 MB] |
| Track 2 - Seq 2 | track_2_s2_rgb [6.1 GB] | track_2_s2_swir [1.1 GB] | track_2_s2_lwir [2.1 GB] |
| Track 3 - Seq 1 | ❌ | track_3_s1_swir [182.7 MB] | track_3_s1_lwir [1.1 GB] |
| Track 3 - Seq 2 | ❌ | track_3_s2_swir [852.1 MB] | track_3_s2_lwir [1.9 GB] |

Each folder (.zip) follows the naming convention:

track_(nt)_s(ns)_camera.zip

Where:

  • (nt) → Track number (1, 2, 3)
  • (ns) → Sequence number (1, 2)
  • camera → Image type (rgb, swir, or lwir)

The generic naming convention for each jpg/txt is:

track_(nt)_s(ns)_camera_timestampsec_timestampnanosec (.jpg / .txt)
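A small parsing sketch for this convention (the exact separators are assumed from the pattern above; adjust the regex if a downloaded file differs):

```python
import re

# Matches e.g. "track_1_s2_rgb_1730298866_123456789.jpg".
FNAME = re.compile(
    r"track_(?P<track>\d+)_s(?P<seq>\d+)_(?P<camera>rgb|swir|lwir)"
    r"_(?P<sec>\d+)_(?P<nsec>\d+)\.(?P<ext>jpg|txt)$"
)

def parse_name(name):
    # Returns (track, sequence, camera, timestamp_seconds).
    m = FNAME.match(name)
    if m is None:
        raise ValueError(f"unexpected file name: {name}")
    d = m.groupdict()
    stamp = int(d["sec"]) + int(d["nsec"]) * 1e-9
    return int(d["track"]), int(d["seq"]), d["camera"], stamp

info = parse_name("track_1_s2_rgb_1730298866_123456789.jpg")
```

Parsing the timestamp out of the file name lets you pair each image with its .txt label (same stem) and with the bag and climatology timelines.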

Annotations in the .txt files use the YOLOv8 format to encode target positions:

<class_id> <x_center> <y_center> <width> <height>

Classes list: tracks_inventory/targets_list.yaml
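Since YOLO coordinates are normalized to the image size, converting a label line to a pixel box takes only a few lines. A sketch (image dimensions here are placeholders; use your frame's actual width and height):

```python
def yolo_to_pixels(line, img_w, img_h):
    # Convert one YOLOv8 label line (normalized center/size) to a pixel
    # (x_min, y_min, x_max, y_max) box plus the class id.
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2)

cls_id, box = yolo_to_pixels("3 0.5 0.5 0.25 0.25", 640, 512)
```

The class id indexes into the classes list referenced above.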

Climatology

We provide the climatology data for the two key days surrounding the test campaign:

📄 Climatology 29 & 30 Oct 2024.xlsx

  • 29 October 2024 → the day before the campaign, when targets were placed on the soil at around 09:00 local time.
  • 30 October 2024 → the day of the campaign, when sensor measurements were conducted.

The full Excel file contains minute-by-minute measurements collected across both days. These measurements are useful for processing the thermal camera data, as they allow correlation between atmospheric and surface conditions and thermal imaging performance.

Parameters Provided

The following parameters are available in the dataset (in the order of the Excel file):

| Parameter | Unit |
| --- | --- |
| Time | HH:MM:SS |
| Wind force (10 m) | kt |
| Wind gusts | kt |
| Wind direction | ° (deg) |
| Air temperature | °C |
| T –5 cm (soil) | °C |
| T –10 cm (soil) | °C |
| T –20 cm (soil) | °C |
| T –50 cm (soil) | °C |
| Road surface temperature | °C |
| Grass surface temperature | °C |
| Dew point temperature | °C |
| Relative Humidity (HR) | % |
| Pressure | hPa |
| Clouds (octas @ height @ type) | – |
| Total clouds | octas |
| Precipitation quantity (1 min) | mm |
| Precipitation quantity (1 hour) | mm |
| Precipitation quantity (1 day) | mm |

How to Use with Sequences

To facilitate analysis, the table below shows the exact climatology time windows corresponding to each recorded sequence.
All times refer to 30 October 2024 (campaign day).

| Track | Sequence | Bag file start time (local) | Duration | Climatology window |
| --- | --- | --- | --- | --- |
| 1 | Seq 1 | 13:17:49 | 4 min 12 s | 13:17:49 → 13:22:01 |
| 1 | Seq 2 | 13:54:26 | 19 min 58 s | 13:54:26 → 14:14:24 |
| 2 | Seq 1 | 15:16:35 | 3 min 42.8 s | 15:16:35 → 15:20:17 |
| 2 | Seq 2 | 15:47:05 | 14 min 46 s | 15:47:05 → 16:01:51 |
| 3 | Seq 1 | 17:42:19 | 3 min 41.5 s | 17:42:19 → 17:46:00 |
| 3 | Seq 2 | 17:28:07 | 13 min 18 s | 17:28:07 → 17:41:25 |

By aligning the timestamps of each ROS 2 bag with this climatology log, users can extract the environmental conditions (temperature, humidity, wind, etc.) at the exact moment of each recording.
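A sketch of that alignment, using Track 1 / Sequence 1's window from the table above (the row structure and field names are illustrative, not the Excel file's actual schema):

```python
from datetime import datetime, timedelta

def window_rows(rows, start, duration):
    # Keep the minute-by-minute climatology rows that fall inside a
    # sequence's recording window. `rows` maps "HH:MM:SS" -> measurements;
    # all times are local, 30 October 2024.
    day = "2024-10-30 "
    t0 = datetime.strptime(day + start, "%Y-%m-%d %H:%M:%S")
    t1 = t0 + duration
    return {t: v for t, v in rows.items()
            if t0 <= datetime.strptime(day + t, "%Y-%m-%d %H:%M:%S") <= t1}

# Track 1, Sequence 1: start 13:17:49, duration 4 min 12 s.
# Temperature values below are made up for illustration.
rows = {"13:17:00": {"air_temp": 14.1}, "13:18:00": {"air_temp": 14.2},
        "13:22:00": {"air_temp": 14.3}, "13:23:00": {"air_temp": 14.2}}
sel = window_rows(rows, "13:17:49", timedelta(minutes=4, seconds=12))
```

The same filter applied to the real spreadsheet rows yields, for each sequence, the environmental conditions (temperature, humidity, wind, etc.) prevailing during recording.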

Temperature Profiles (30 October 2024)

The figure below shows the air and soil temperatures (–5 cm, –10 cm, –20 cm, –50 cm) throughout the campaign day (30 October 2024).
Red shaded regions correspond to the time windows when each track sequence was recorded.

Temperature Profiles 30 Oct 2024

Track 3 RGB Camera Failure

During Track 3 recordings (30 October 2024), the RGB camera experienced a progressive failure.

  • The first part of the recordings (starting at 17:28:07 and 17:42:19, see the Climatology section) already yielded very dark frames, making it extremely difficult to detect any target or terrain details.
  • By the end of the sequences, the RGB feed was completely black given the near-nighttime conditions.
  • This issue affects both Sequence 1 (3 min 41.5 s) and Sequence 2 (13 min 18 s).

We recovered the bag metadata and extracted a short video from the RGB camera illustrating the Track 3 illumination conditions at the beginning of the recordings:

[7] Acknowledgments

The authors thank Person 1 and Person 2 for their support in the hardware and software design.
They also thank Person 3 and Person 4 for their assistance in organizing the measurement campaign. They also thank Organization 1 for providing the climatology study during the days of the test campaign.

[8] Citation

If you use MineInsight in your own work, please cite the accompanying paper:

TEMPORARY REMOVED -> Anonymized repository

[9] License

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
You are free to share and adapt this work for non-commercial purposes, as long as you credit the authors and apply the same license to any derivative works.

For full details, see:
CC BY-NC-SA 4.0 License

[10] Related Work
