The official benchmark codebase for the paper "V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception"


V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception

Lei Yang · Xinyu Zhang · Chen Wang · Jun Li · Jiaqi Ma · Zhiying Song · Tong Zhao · Ziying Song · Li Wang · Mo Zhou · Yang Shen · Kai Wu · Chen Lv

Website · Paper · Docker

This is the official implementation of "V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception".

Supported by the THU OpenMDP Lab.

Overview

CodeBase Features

Data Download

Please check our website to download the data, which is provided in OPV2V and KITTI formats.

After downloading the data, please organize it in the following structure:

V2X-Radar
├── data
│   ├── v2x-radar
│   │   ├── mini
│   │   │   ├── v2x-radar-i   # KITTI Format
│   │   │   │   ├── training
│   │   │   │   │   ├── velodyne
│   │   │   │   │   ├── radar # 4D Radar points, transformed into the LiDAR frame
│   │   │   │   │   ├── calib
│   │   │   │   │   ├── image_1
│   │   │   │   │   ├── image_2
│   │   │   │   │   ├── image_3
│   │   │   │   │   ├── label_2
│   │   │   │   ├── ImageSets
│   │   │   │   │   ├── train.txt
│   │   │   │   │   ├── val.txt
│   │   │   ├── v2x-radar-v   # KITTI Format
│   │   │   │   ├── training
│   │   │   │   │   ├── velodyne
│   │   │   │   │   ├── radar # 4D Radar points, transformed into the LiDAR frame
│   │   │   │   │   ├── calib
│   │   │   │   │   ├── image_2
│   │   │   │   │   ├── label_2
│   │   │   │   ├── ImageSets
│   │   │   │   │   ├── train.txt
│   │   │   │   │   ├── val.txt
│   │   │   ├── v2x-radar-c  # OpenV2V Format
│   │   │   │   ├── train
│   │   │   │   │   ├── 2024-05-15-16-28-09
│   │   │   │   │   │   ├── -1  # Roadside
│   │   │   │   │   │   │   ├── 00000.pcd - 00250.pcd # LiDAR point clouds from timestamp 0 to 250
│   │   │   │   │   │   │   ├── 00000_radar.pcd - 00250_radar.pcd # 4D Radar point clouds, transformed into the LiDAR frame, from timestamp 0 to 250
│   │   │   │   │   │   │   ├── 00000.yaml - 00250.yaml # metadata for each timestamp
│   │   │   │   │   │   │   ├── 00000_camera0.jpg - 00250_camera0.jpg # left camera images
│   │   │   │   │   │   │   ├── 00000_camera1.jpg - 00250_camera1.jpg # front camera images
│   │   │   │   │   │   │   ├── 00000_camera2.jpg - 00250_camera2.jpg # right camera images
│   │   │   │   │   │   ├── 142 # Vehicle Side
│   │   │   │   ├── validate
│   │   │   │   ├── test
│   │   ├── trainval-full  # release soon
│   │   ├── ...
│   ├── other datasets
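The layout above can be navigated programmatically. Below is a minimal sketch, assuming the naming convention shown in the tree (zero-padded 5-digit timestamps, a `_radar` suffix for 4D Radar clouds, and `_camera0..2` suffixes for the left/front/right images of the roadside agent). The `frame_files` and `frame_paths` helpers are hypothetical, not part of this codebase.

```python
from pathlib import Path

def frame_files(timestamp: int) -> dict:
    """Expected file names for one timestamp of one agent in the
    OPV2V-format cooperative split (v2x-radar-c)."""
    stem = f"{timestamp:05d}"  # e.g. 0 -> "00000"
    return {
        "lidar": f"{stem}.pcd",
        "radar": f"{stem}_radar.pcd",   # 4D Radar cloud, in the LiDAR frame
        "meta": f"{stem}.yaml",         # per-timestamp metadata
        "cameras": [f"{stem}_camera{i}.jpg" for i in range(3)],
    }

def frame_paths(root: str, sequence: str, agent: str, timestamp: int) -> list[Path]:
    """Full paths for one frame under the directory structure above."""
    base = (Path(root) / "data" / "v2x-radar" / "mini" / "v2x-radar-c"
            / "train" / sequence / agent)
    files = frame_files(timestamp)
    names = [files["lidar"], files["radar"], files["meta"], *files["cameras"]]
    return [base / n for n in names]

if __name__ == "__main__":
    # Roadside agent ("-1") of the example sequence, timestamp 0
    for p in frame_paths("V2X-Radar", "2024-05-15-16-28-09", "-1", 0):
        print(p)
```

Loading the actual point clouds and YAML metadata would then be a matter of passing these paths to a PCD reader and a YAML parser.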

Changelog

  • The trainval-full dataset will be released soon.
  • Mar. 18, 2025: The mini sample data is released.
  • Mar. 15, 2025: The paper and supplementary material are released.
  • Mar. 14, 2025: The codebase is released.
  • Nov. 7, 2024: The paper is released.

Quick Start

Cooperative Perception

Please refer to CodeBase/OpenCOOD.

Single-agent Perception

Please refer to CodeBase/BEVHeight.

Acknowledgment

This project would not be possible without the following codebases.

Citation

@article{yang2024v2x,
  title={V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception},
  author={Yang, Lei and Zhang, Xinyu and Wang, Chen and Li, Jun and Ma, Jiaqi and Song, Zhiying and Zhao, Tong and Song, Ziying and Wang, Li and Zhou, Mo and Shen, Yang and Lv, Chen},
  journal={arXiv preprint arXiv:2411.10962},
  year={2024}
}
