---------
This work is under review and the repository is fully anonymous.
---------
▶ Click the image to watch the intro video (opens in a new tab)
---------
- [1] Motivation
- [2] Experimental Setup
- [3] Environments and Sequences
- [4] Targets
- [5] Calibration
- [6] Data
- [7] Acknowledgments
- [8] Citation
- [9] License
- [10] Related Work
Landmines remain a persistent threat in conflict-affected regions, posing risks to civilians and impeding post-war recovery. Traditional demining methods are often slow, hazardous, and costly, necessitating the development of robotic solutions for safer and more efficient landmine detection.
MineInsight is a publicly available multi-spectral dataset designed to support advancements in robotic demining and off-road navigation. It features a diverse collection of sensor data, including visible (RGB, monochrome), short-wave infrared (VIS-SWIR), long-wave infrared (LWIR), and LiDAR scans. The dataset includes dual-view sensor scans from both a UGV and its robotic arm, providing multiple viewpoints to mitigate occlusions and improve detection accuracy.
With over 38,000 RGB frames, 53,000 VIS-SWIR frames, and 108,000 LWIR frames recorded in both daylight and nighttime conditions, and 35 different targets distributed along 3 tracks, MineInsight serves as a benchmark for developing and evaluating detection algorithms. It also provides estimated target locations, supporting researchers in algorithm validation and performance benchmarking.
MineInsight follows best practices from established robotic datasets and provides a valuable resource for the community to advance research in landmine detection, off-road navigation, and sensor fusion.
This section follows the terminology and conventions outlined in the accompanying paper.
For a more detailed understanding of the methodology and experimental design, please refer to the paper.
| Platform and Robotic Arm | Platform Sensor Suite | Robotic Arm Sensor Suite |
|---|---|---|
| Clearpath Husky A200 UGV<br>Universal Robots UR5e Robotic Arm | Livox Mid-360 LiDAR<br>Sevensense Core Research Module<br>Microstrain 3DM-GV7-AR IMU | Teledyne FLIR Boson 640<br>Alvium 1800 U-130 VSWIR<br>Alvium 1800 U-240<br>Livox AVIA |
The coordinate systems (and their TF names) of all sensors in our platform are illustrated in the figure below.
Note: The positions of the axis systems in the figure are approximate.
This visualization provides insight into the relative orientations between sensors,
whether in the robotic arm sensor suite or the platform sensor suite.
For the full transformation chain, refer to the following ROS 2 topics in the dataset:
- `/tf_static` → Contains static transformations between sensors.
- `/tf` → Contains dynamic transformations recorded during operation.
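As a quick sanity check, the sketch below queries a transform through tf2 while one of the sequence bags is playing (`ros2 bag play <sequence>`). It only uses standard `rclpy`/`tf2_ros`; the frame names in the lookup are placeholders and should be replaced with the TF names shown in the figure above.

```python
import rclpy
from rclpy.node import Node
from tf2_ros import Buffer, TransformListener


class TfInspector(Node):
    def __init__(self):
        super().__init__('tf_inspector')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.timer = self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            # 'base_link' and 'avia_lidar' are hypothetical frame names:
            # replace them with the TF names shown in the figure above.
            t = self.buffer.lookup_transform('base_link', 'avia_lidar', rclpy.time.Time())
            self.get_logger().info(f'translation: {t.transform.translation}')
        except Exception as err:  # transform not received yet
            self.get_logger().warn(str(err))


def main():
    rclpy.init()
    rclpy.spin(TfInspector())


if __name__ == '__main__':
    main()
```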
The dataset was collected across 3 distinct tracks, each designed to represent a demining scenario with varying terrain and environmental conditions. These tracks contain a diverse set of targets, positioned to challenge detection algorithm development. The figures show a top-view point cloud of the target distribution along each track.
For the sake of reproducibility, and to leave ground-truth auto-labelling and its improvement as an open challenge, we also release the raw data from the 3 reference sequences (the ones containing the AprilTags).
Please note that these ROS 2 bags have not been processed or altered: they are provided exactly as recorded, with none of the topic remapping applied to the dataset sequences.
You can download the bags from here:
- TRACK 1 Reference Sequence ROS2 Bag
- TRACK 2 Reference Sequence ROS2 Bag
- TRACK 3 Reference Sequence ROS2 Bag
In addition, we provide the ground position of each AprilTag stick, expressed in the map reference frame, as described in the paper.
These are released as JSON files, allowing users to evaluate the distances between the markers.
You can find them here: reference_sequences/
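As a minimal usage sketch, the snippet below loads one of these JSON files and prints the pairwise distances between markers. The file name and the per-marker field layout (`x`, `y`, `z` keyed by marker ID) are assumptions; adapt them to the actual files in `reference_sequences/`.

```python
import json
import math
from itertools import combinations

# Hypothetical file name; use any of the JSON files in reference_sequences/.
with open('reference_sequences/track_1_apriltags.json') as f:
    markers = json.load(f)

# Assumes each entry maps a marker ID to its ground position in the map frame.
for (id_a, a), (id_b, b) in combinations(markers.items(), 2):
    d = math.dist((a['x'], a['y'], a['z']), (b['x'], b['y'], b['z']))
    print(f'{id_a} <-> {id_b}: {d:.3f} m')
```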
For each track, a detailed inventory PDF is available, providing the full list of targets along with their respective details.
You can find them in the tracks_inventory folder of this repository:
📄 Track 1 Inventory | 📄 Track 2 Inventory | 📄 Track 3 Inventory
Each PDF lists every target with:
- ID: Unique identifier for each target;
- Name: Official name of the target;
- Image: A visual reference of the object for recognition;
- CAT-UXO link: Detailed explanation of the target (available only for landmines).
The dataset includes intrinsic and extrinsic calibration files for all cameras and LiDARs.
intrinsics_calibration/
- `lwir_camera_intrinsics.yaml` → LWIR camera
- `rgb_camera_intrinsics.yaml` → RGB camera
- `sevensense_cameras_intrinsics.yaml` → Sevensense grayscale cameras
- `swir_camera_intrinsics.yaml` → VIS-SWIR camera
extrinsics_calibration/
- `lwir_avia_extrinsics.yaml` → LWIR ↔ Livox AVIA
- `rgb_avia_extrinsics.yaml` → RGB ↔ Livox AVIA
- `sevensense_mid360_extrinsics.yaml` → Sevensense ↔ Livox Mid-360
- `swir_avia_extrinsics.yaml` → VIS-SWIR ↔ Livox AVIA
Note:
Intrinsic parameters are also included in the extrinsics calibration files, as they were evaluated using raw camera images.
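A minimal sketch of how the intrinsics could be used to undistort a raw frame with OpenCV is shown below. The YAML key names (`camera_matrix`, `distortion_coefficients`) follow the common OpenCV/ROS calibration layout and are an assumption; check them against the released files.

```python
import cv2
import numpy as np
import yaml

with open('intrinsics_calibration/rgb_camera_intrinsics.yaml') as f:
    calib = yaml.safe_load(f)

# Assumed keys, following the usual OpenCV/ROS calibration YAML layout.
K = np.array(calib['camera_matrix']['data'], dtype=np.float64).reshape(3, 3)
dist = np.array(calib['distortion_coefficients']['data'], dtype=np.float64)

raw = cv2.imread('track_1_s1_rgb_example.jpg')  # hypothetical raw frame
rectified = cv2.undistort(raw, K, dist)
cv2.imwrite('track_1_s1_rgb_example_rectified.jpg', rectified)
```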
We release 2 sequences per track, resulting in a total of 6 sequences.
The data is available in three different formats:
- 📁 ROS 2 Bags
- 📁 ROS 2 Bags with Livox Custom Msg
- 🖼️ Raw Images
Each ROS 2 Bag includes:
Click here to view all the topics with a detailed explanation
| Topic | Message Type | Description |
|---|---|---|
| /allied_swir/image_raw/compressed | sensor_msgs/msg/CompressedImage | SWIR camera raw image |
| /allied_swir/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | SWIR camera rectified image |
| /alphasense/cam_0/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 0 raw image |
| /alphasense/cam_0/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 0 rectified image |
| /alphasense/cam_1/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 1 raw image |
| /alphasense/cam_1/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 1 rectified image |
| /alphasense/cam_2/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 2 raw image |
| /alphasense/cam_2/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 2 rectified image |
| /alphasense/cam_3/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 3 raw image |
| /alphasense/cam_3/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 3 rectified image |
| /alphasense/cam_4/image_raw/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 4 raw image |
| /alphasense/cam_4/image_raw/rectified/compressed | sensor_msgs/msg/CompressedImage | Sevensense Core Greyscale camera 4 rectified image |
| /alphasense/imu | sensor_msgs/msg/Imu | IMU data from Sevensense Core |
| /avia/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox AVIA LiDAR |
| /avia/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox AVIA LiDAR |
| /flir/thermal/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image |
| /flir/thermal/rectified/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image |
| /flir/thermal/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera raw image with colorized overlay |
| /flir/thermal/rectified/colorized/compressed | sensor_msgs/msg/CompressedImage | LWIR camera rectified image with colorized overlay |
| /microstrain/imu | sensor_msgs/msg/Imu | IMU data from Microstrain (internal) |
| /mid360/livox/imu | sensor_msgs/msg/Imu | IMU data from Livox Mid-360 LiDAR |
| /mid360/livox/lidar/pointcloud2 | sensor_msgs/msg/PointCloud2 | Point cloud data from Livox Mid-360 LiDAR |
| /odometry/filtered | nav_msgs/msg/Odometry | Filtered odometry data (ROS 2 localization fusion output) |
| /odometry/wheel | nav_msgs/msg/Odometry | Wheel odometry data from UGV wheel encoder |
| /tf | tf2_msgs/msg/TFMessage | Real-time transformations between coordinate frames |
| /tf_static | tf2_msgs/msg/TFMessage | Static transformations |
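As an example of offline access, the sketch below reads one of the compressed image topics directly from a bag with `rosbag2_py`, without playing it back. The bag folder name is a placeholder, and the storage plugin (`sqlite3` vs `mcap`) depends on how the bag you downloaded was recorded.

```python
import cv2
import numpy as np
import rosbag2_py
from rclpy.serialization import deserialize_message
from sensor_msgs.msg import CompressedImage

# 'track_1_s1' is a placeholder for the downloaded bag folder.
reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri='track_1_s1', storage_id='sqlite3'),
    rosbag2_py.ConverterOptions('', ''),
)

while reader.has_next():
    topic, data, stamp_ns = reader.read_next()
    if topic == '/flir/thermal/rectified/compressed':
        msg = deserialize_message(data, CompressedImage)
        frame = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_UNCHANGED)
        # frame is now a NumPy image; process or save it here.
```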
If you are downloading a ROS 2 Bag with Livox Custom Msg, you will find the following additional topics:
| Topic | Message Type | Description |
|---|---|---|
| /avia/livox/lidar | livox_interfaces/msg/CustomMsg | Raw point cloud data from Livox AVIA LiDAR in custom Livox format |
| /mid360/livox/lidar | livox_ros_driver2/msg/CustomMsg | Raw point cloud data from Livox Mid-360 LiDAR in custom Livox format |
Note:
These messages include timestamps for each point in the point cloud scan.
To correctly decode and use these messages, install the official Livox drivers:
- Livox AVIA (🔗 livox_ros2_driver)
- Livox Mid-360 (🔗 livox_ros_driver2)
For installation instructions, refer to the documentation in the respective repositories.
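As a usage sketch, the node below subscribes to the Mid-360 custom topic and reconstructs absolute per-point timestamps. Field names (`timebase` and per-point `offset_time`, both in nanoseconds) follow the `livox_ros_driver2` CustomMsg definition; the AVIA topic uses the analogous `livox_interfaces` message, so adapt the import accordingly.

```python
import rclpy
from rclpy.node import Node
from livox_ros_driver2.msg import CustomMsg  # the AVIA topic uses livox_interfaces instead


class LivoxTimestamps(Node):
    def __init__(self):
        super().__init__('livox_timestamps')
        self.create_subscription(CustomMsg, '/mid360/livox/lidar', self.on_scan, 10)

    def on_scan(self, msg: CustomMsg):
        if not msg.points:
            return
        # timebase is the scan start time (ns); offset_time is per-point (ns).
        stamps = [(msg.timebase + p.offset_time) * 1e-9 for p in msg.points]
        self.get_logger().info(
            f'{msg.point_num} points spanning {max(stamps) - min(stamps):.4f} s')


def main():
    rclpy.init()
    rclpy.spin(LivoxTimestamps())


if __name__ == '__main__':
    main()
```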
You can download the datasets from the links below:
Track 1

🔹 Sequence 1:
- 🗂️ ROS 2 Bag (Standard) [19.1 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [19.6 GB]

🔹 Sequence 2:
- 🗂️ ROS 2 Bag (Standard) [75.3 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [77.9 GB]

Track 2

🔹 Sequence 1:
- 🗂️ ROS 2 Bag (Standard) [15.1 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [15.5 GB]

🔹 Sequence 2:
- 🗂️ ROS 2 Bag (Standard) [68.9 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [71 GB]

Track 3

🔹 Sequence 1:
- 🗂️ ROS 2 Bag (Standard) [5.5 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [5.9 GB]

🔹 Sequence 2:
- 🗂️ ROS 2 Bag (Standard) [24.4 GB]
- 🗂️ ROS 2 Bag (with Livox Custom Msg) [26 GB]
Each archive contains images + 2D bounding box annotations (YOLOv8). After unzipping you'll get:
| Track / Seq | RGB | VIS-SWIR | LWIR |
|---|---|---|---|
| Track 1 - Seq 1 | track_1_s1_rgb [1.5 GB] | track_1_s1_swir [465.4 MB] | track_1_s1_lwir [649.7 MB] |
| Track 1 - Seq 2 | track_1_s2_rgb [5 GB] | track_1_s2_swir [1.5 GB] | track_1_s2_lwir [2.9 GB] |
| Track 2 - Seq 1 | track_2_s1_rgb [1.1 GB] | track_2_s1_swir [332.2 MB] | track_2_s1_lwir [507.8 MB] |
| Track 2 - Seq 2 | track_2_s2_rgb [6.1 GB] | track_2_s2_swir [1.1 GB] | track_2_s2_lwir [2.1 GB] |
| Track 3 - Seq 1 | ❌ | track_3_s1_swir [182.7 MB] | track_3_s1_lwir [1.1 GB] |
| Track 3 - Seq 2 | ❌ | track_3_s2_swir [852.1 MB] | track_3_s2_lwir [1.9 GB] |
Each folder (.zip) follows the naming convention:
track_(nt)_s(ns)_camera.zip
Where:
- (nt) β Track number (1, 2, 3)
- (ns) β Sequence number (1, 2)
- camera β Image type (rgb, swir, or lwir)
The generic naming convention for each jpg/txt is:
track_(nt)_s(ns)_camera_timestampsec_timestampnanosec (.jpg / .txt)
Target positions in the .txt files are annotated in the YOLOv8 format:
<class_id> <x_center> <y_center> <width> <height>
Classes list: tracks_inventory/targets_list.yaml
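A minimal sketch for turning one label file into pixel-space boxes on its matching image is shown below; the file names are placeholders that follow the naming convention above.

```python
import cv2

# Hypothetical frame/label pair following the naming convention above.
stem = 'track_1_s1_rgb_1730287069_123456789'
img = cv2.imread(f'{stem}.jpg')
h, w = img.shape[:2]

with open(f'{stem}.txt') as f:
    for line in f:
        # YOLOv8 labels are normalized: <class_id> <x_center> <y_center> <width> <height>
        cls, xc, yc, bw, bh = line.split()
        xc, yc, bw, bh = float(xc) * w, float(yc) * h, float(bw) * w, float(bh) * h
        top_left = (int(xc - bw / 2), int(yc - bh / 2))
        bottom_right = (int(xc + bw / 2), int(yc + bh / 2))
        cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)

cv2.imwrite('preview.png', img)
```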
We provide the climatology data for the two key days surrounding the test campaign:
📊 Climatology 29 & 30 Oct 2024.xlsx
- 29 October 2024: the day before the campaign, when targets were placed on the soil at around 09:00 local time.
- 30 October 2024: the day of the campaign, when sensor measurements were conducted.
The full Excel file contains minute-by-minute measurements collected across both days. These measurements are useful for processing the thermal camera data, as they allow correlation between atmospheric and surface conditions and thermal imaging performance.
The following parameters are available in the dataset (in the order of the Excel file):
| Parameter | Unit |
|---|---|
| Time | HH:MM:SS |
| Wind force (10 m) | kt |
| Wind gusts | kt |
| Wind direction | ° (deg) |
| Air temperature | °C |
| T −5 cm (soil) | °C |
| T −10 cm (soil) | °C |
| T −20 cm (soil) | °C |
| T −50 cm (soil) | °C |
| Road surface temperature | °C |
| Grass surface temperature | °C |
| Dew point temperature | °C |
| Relative Humidity (HR) | % |
| Pressure | hPa |
| Clouds (octas @ height @ type) | – |
| Total clouds | octas |
| Precipitation quantity (1 min) | mm |
| Precipitation quantity (1 hour) | mm |
| Precipitation quantity (1 day) | mm |
To facilitate analysis, the table below shows the exact climatology time windows corresponding to each recorded sequence.
All times refer to 30 October 2024 (campaign day).
| Track | Sequence | Bag file start time (local) | Duration | Climatology window |
|---|---|---|---|---|
| 1 | Seq 1 | 13:17:49 | 4 min 12 s | 13:17:49 – 13:22:01 |
| 1 | Seq 2 | 13:54:26 | 19 min 58 s | 13:54:26 – 14:14:24 |
| 2 | Seq 1 | 15:16:35 | 3 min 42.8 s | 15:16:35 – 15:20:17 |
| 2 | Seq 2 | 15:47:05 | 14 min 46 s | 15:47:05 – 16:01:51 |
| 3 | Seq 1 | 17:42:19 | 3 min 41.5 s | 17:42:19 – 17:46:00 |
| 3 | Seq 2 | 17:28:07 | 13 min 18 s | 17:28:07 – 17:41:25 |
By aligning the timestamps of each ROS 2 bag with this climatology log, users can extract the environmental conditions (temperature, humidity, wind, etc.) at the exact moment of each recording.
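A minimal lookup sketch is shown below. The sheet and column names are assumptions based on the parameter table above; adapt them to the actual layout of the Excel workbook.

```python
import pandas as pd

# Sheet name is a guess; the workbook covers both 29 and 30 October 2024.
clima = pd.read_excel('Climatology 29 & 30 Oct 2024.xlsx', sheet_name='30-10-2024')

# 'Time' is HH:MM:SS (first column in the parameter table above).
clima['t'] = pd.to_timedelta(clima['Time'].astype(str))

# Track 2, Sequence 1 starts at 15:16:35 local time (see table above).
query = pd.to_timedelta('15:16:35')
nearest = clima.iloc[(clima['t'] - query).abs().argmin()]
print(nearest)
```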
The figure below shows the air and soil temperatures (−5 cm, −10 cm, −20 cm, −50 cm) throughout the campaign day (30 October 2024).
Red shaded regions correspond to the time windows when each track sequence was recorded.
During the Track 3 recordings (30 October 2024), the RGB camera experienced a progressive failure.
- The first part of the recordings (starting at 17:28:07 and 17:42:19, see the Climatology section) already contains frames that would have been very dark, making it extremely difficult to detect any target or terrain detail.
- By the end of the sequences, the RGB feed would have been completely black given the near-nighttime conditions.
- This issue affects both Sequence 1 (3 min 41.5 s) and Sequence 2 (13 min 18 s).
We recovered the bag metadata and extracted a short video from the RGB camera illustrating the Track 3 illumination conditions at the beginning of the recordings:
The authors thank Person 1 and Person 2 for their support in the hardware and software design.
They also thank Person 3 and Person 4 for their assistance in organizing the measurement campaign.
They also thank Organization 1 for providing the climatology study during the days of the test campaign.
If you use MineInsight in your own work, please cite the accompanying paper:
TEMPORARILY REMOVED -> anonymized repository
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
You are free to share and adapt this work for non-commercial purposes, as long as you credit the authors and apply the same license to any derivative works.
For full details, see:
CC BY-NC-SA 4.0 License