TL;DR

The SemanticSpray dataset contains scenes in wet surface conditions captured by camera, LiDAR, and radar sensors.
The following label types are provided:
  • Camera: 2D Boxes
  • LiDAR: 3D Boxes, Semantic Labels
  • Radar: Semantic Labels
To download the dataset, follow these instructions.

News

  • 2024-04-21: The SemanticSpray++ dataset has been accepted at IV2024. It additionally provides 2D camera boxes, 3D LiDAR boxes, and semantic labels for the radar targets (Arxiv).
  • 2023-07-01: The SemanticSpray dataset was released as part of our RA-L / ICRA-2024 paper, providing semantic labels for the LiDAR point cloud (Arxiv).

Abstract

Autonomous vehicles rely on camera, LiDAR, and radar sensors to navigate the environment. Adverse weather conditions like snow, rain, and fog are known to be problematic for both camera and LiDAR-based perception systems. Currently, it is difficult to evaluate the performance of these methods due to the lack of publicly available datasets containing multimodal labeled data. To address this limitation, we propose the SemanticSpray++ dataset, which provides labels for camera, LiDAR, and radar data of highway-like scenarios in wet surface conditions. In particular, we provide 2D bounding boxes for the camera image, 3D bounding boxes for the LiDAR point cloud, and semantic labels for the radar targets. By labeling all three sensor modalities, the SemanticSpray++ dataset offers a comprehensive test bed for analyzing the performance of different perception methods when vehicles travel on wet surface conditions.

Getting Started

  • An automatic download script is provided:
    • git clone https://github.com/uulm-mrm/semantic_spray_dataset.git
    • bash download.sh
  • More details are provided on GitHub, including instructions for the manual download.
  • Once downloaded, the data should look like this:
    ├── data
    │   ├── Crafter_dynamic
    │   │   ├── 0000_2021-09-08-14-36-56_0
    │   │   │   ├── image_2
    │   │   │   │   ├── 000000.jpg
    │   │   │   │   └── ....
    │   │   │   ├── delphi_radar
    │   │   │   │   ├── 000000.bin
    │   │   │   │   └── ....
    │   │   │   ├── ibeo_front
    │   │   │   │   ├── 000000.bin
    │   │   │   │   └── ....
    │   │   │   ├── ibeo_rear
    │   │   │   │   ├── 000000.bin
    │   │   │   │   └── ....
    │   │   │   ├── labels
    │   │   │   │   ├── 000000.label
    │   │   │   │   └── ....
    │   │   │   ├── radar_labels
    │   │   │   │   ├── 000000.npy
    │   │   │   │   └── ....
    │   │   │   ├── object_labels
    │   │   │   │   ├── camera
    │   │   │   │   │   ├── 000000.json
    │   │   │   │   │   └── ....
    │   │   │   │   ├── lidar
    │   │   │   │   │   ├── 000000.json
    │   │   │   │   │   └── ....
    │   │   │   ├── velodyne
    │   │   │   │   ├── 000000.bin
    │   │   │   │   └── ....
    │   │   │   ├── poses.txt
    │   │   │   ├── metadata.txt
    │   ├── Golf_dynamic
    │   ...
    ├── ImageSets
    │   ├── test.txt
    │   └── train.txt
    ├── ImageSets++
    │   └── test.txt
    └── README.txt
                  
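Given the layout above, the frames of a scene can be indexed by listing one of the sensor folders. The sketch below uses the "velodyne" folder as the reference; the function name and the choice of reference folder are illustrative, not part of the official devkit.

```python
import os

def list_frames(scene_dir):
    """Return the sorted frame IDs of a scene.

    Assumes the directory layout shown above, where `scene_dir` points to a
    single scene folder (e.g., data/Crafter_dynamic/<scene_name>) and the
    VLP32C scans in "velodyne" define the set of frames.
    """
    velodyne_dir = os.path.join(scene_dir, "velodyne")
    return sorted(
        f.split(".")[0]
        for f in os.listdir(velodyne_dir)
        if f.endswith(".bin")
    )
```

Each returned ID (e.g., "000000") can then be combined with the other sensor folders and their respective file extensions to load the synchronized data of that frame.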

Exploring The Data

Sensor Setup

The sensor setup used for the recordings is as follows:

  • 1 Front Camera
  • 1 Velodyne VLP32C LiDAR (top-mounted, high-resolution LiDAR)
  • 2 Ibeo LUX 2010 LiDARs (front- and rear-mounted, low-resolution LiDARs)
  • 1 Aptiv ESR 2.5 Radar

Sensor Data

For each scene, the data of each sensor is stored in its respective folder:

Raw Data

  • [Camera Image] in the folder "image_2"
  • [VLP32C LiDAR] in the folder "velodyne"
  • [Ibeo LUX 2010 LiDAR front] in the folder "ibeo_front"
  • [Ibeo LUX 2010 LiDAR rear] in the folder "ibeo_rear"
  • [Aptiv ESR 2.5 Radar] in the folder "delphi_radar"
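The .bin files above can be read as packed float32 arrays. A minimal loading sketch is shown below; it assumes the KITTI-style layout of four float32 features per point (x, y, z, intensity) for the VLP32C scans. The per-point feature count of the other sensors (Ibeo LiDARs, radar targets) may differ, so check the dataset's GitHub devkit before relying on a specific value.

```python
import numpy as np

def load_point_cloud(path, num_features=4):
    """Load a .bin scan as an (N, num_features) float32 array.

    Assumes a packed float32 layout as in KITTI/SemanticKITTI
    (x, y, z, intensity for the VLP32C); `num_features` may need to be
    adjusted for the Ibeo LiDARs and the radar data.
    """
    scan = np.fromfile(path, dtype=np.float32)
    return scan.reshape(-1, num_features)
```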

Labels

  • [Semantic Labels for VLP32C LiDAR] in the folder "labels"
  • [Semantic Labels for Radar] in the folder "radar_labels"
  • [3D Object Labels for VLP32C LiDAR] in the folder "object_labels/lidar"
  • [2D Object Labels for Camera] in the folder "object_labels/camera"
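The three label formats can be loaded with standard tools. The sketch below assumes the SemanticKITTI convention of one uint32 label per point for the .label files; the .npy radar labels are plain NumPy arrays, and the object labels are JSON files whose exact box structure should be inspected in one sample file (or in the GitHub devkit) before use.

```python
import json
import numpy as np

def load_semantic_labels(path):
    """Load per-point semantic labels from a .label file.

    Assumes the SemanticKITTI convention: one uint32 per LiDAR point,
    in the same order as the corresponding .bin scan.
    """
    return np.fromfile(path, dtype=np.uint32)

def load_radar_labels(path):
    """Load per-target semantic labels stored as a NumPy .npy array."""
    return np.load(path)

def load_object_labels(path):
    """Load 2D/3D object labels from a JSON file.

    The box structure (keys, coordinate conventions) is not specified
    here; inspect one file under object_labels/ to confirm it.
    """
    with open(path) as f:
        return json.load(f)
```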

Other

  • The ego vehicle poses are stored in the file "poses.txt", following the convention used by the SemanticKITTI dataset.
  • Additional information on the scene setup (e.g., ego velocity) is given in the "metadata.txt" file.
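Since the poses follow the SemanticKITTI convention, each line of "poses.txt" holds the 12 row-major entries of a 3x4 transform matrix. A minimal parser sketch under that assumption:

```python
import numpy as np

def load_poses(path):
    """Parse poses.txt into a list of 4x4 homogeneous transforms.

    Assumes the SemanticKITTI convention: one pose per line, given as
    the 12 row-major entries of a 3x4 matrix (rotation + translation).
    """
    poses = []
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            vals = [float(v) for v in line.split()]
            pose = np.eye(4)  # bottom row stays [0, 0, 0, 1]
            pose[:3, :4] = np.array(vals).reshape(3, 4)
            poses.append(pose)
    return poses
```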

In the following GitHub repository, we provide Python code for loading and visualizing the dataset.

Related Work

Label-Efficient Semantic Segmentation of LiDAR Point Clouds in Adverse Weather Conditions
Our approach for label-efficient semantic segmentation can learn to segment point clouds in adverse weather using only a few labeled scans (e.g., 1, 5, or 10).
Project Page / Arxiv / Video


Energy-based Detection of Adverse Weather Effects in LiDAR Data
Our method can robustly detect adverse weather conditions like rain spray, rainfall, snow, and fog in LiDAR point clouds. Additionally, it achieves state-of-the-art results in the detection of weather effects unseen during training.
Project Page / Arxiv / Video


Towards Robust 3D Object Detection In Rainy Conditions
In this work, we explore the use of noise filtering and LiDAR / Radar sensor fusion to improve 3D object detection robustness in adverse weather.
Project Page / Arxiv / Video


Citation

SemanticSpray Dataset

@article{10143263,
  author  = {Piroli, Aldi and Dallabetta, Vinzenz and Kopp, Johannes and Walessa, Marc and Meissner, Daniel and Dietmayer, Klaus},
  journal = {IEEE Robotics and Automation Letters},
  title   = {Energy-Based Detection of Adverse Weather Effects in LiDAR Data},
  year    = {2023},
  volume  = {8},
  number  = {7},
  pages   = {4322-4329},
  doi     = {10.1109/LRA.2023.3282382}
}
          
Road Spray Dataset

@misc{https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3537,
  url       = { https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/3537 },
  author    = { Linnhoff, Clemens and Elster, Lukas and Rosenberger, Philipp and Winner, Hermann },
  doi       = { 10.48328/tudatalib-930 },
  keywords  = { Automated Driving, Lidar, Radar, Spray, Weather, Perception, Simulation, 407-04 Verkehrs- und Transportsysteme, Intelligenter und automatisierter Verkehr, 380 },
  publisher = { Technical University of Darmstadt },
  year      = { 2022-04 },
  copyright = { Creative Commons Attribution 4.0 },
  title     = { Road Spray in Lidar and Radar Data for Individual Moving Objects }
}