DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications

Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon
Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021

H5-Index: 51; Citations: 17 (according to Google Scholar, 2024)

Abstract

We present DurLAR, a high-fidelity 128-channel 3D LiDAR dataset with panoramic ambient (near infrared) and reflectivity imagery, as well as a sample benchmark task using depth estimation for autonomous driving applications. Our driving platform is equipped with a high-resolution 128-channel LiDAR, a 2MPix stereo camera, a lux meter and a GNSS/INS system. Ambient and reflectivity images are made available along with the LiDAR point clouds to facilitate multi-modal use of concurrent ambient and reflectivity scene information. Leveraging DurLAR, with a resolution exceeding that of prior benchmarks, we consider the task of monocular depth estimation and use this increased availability of higher resolution, yet sparse ground truth scene depth information to propose a novel joint supervised/self-supervised loss formulation. We compare performance across our new DurLAR dataset, the established KITTI benchmark and the Cityscapes dataset. Our evaluation shows that our joint use of supervised and self-supervised loss terms, enabled by the superior ground truth resolution and availability within DurLAR, improves the quantitative and qualitative performance of leading contemporary monocular depth estimation approaches (RMSE = 3.639, SqRel = 0.936).
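The abstract describes combining a supervised term (driven by the sparse but high-resolution LiDAR ground truth) with a self-supervised term in a single loss. The paper's exact formulation is defined in the publication itself; the sketch below is only an illustrative assumption of how such a joint loss could be composed, with an L1 supervised term masked to valid LiDAR returns, a precomputed per-pixel photometric error standing in for the self-supervised term, and a hypothetical `weight` balance parameter.

```python
import numpy as np

def joint_depth_loss(pred_depth, lidar_depth, photometric_error, weight=0.5):
    """Hypothetical joint loss: supervised L1 on sparse LiDAR returns plus a
    self-supervised photometric term. `weight` and the term choices are
    assumptions for illustration, not the paper's actual formulation."""
    # Sparse ground truth: supervise only pixels the LiDAR actually hit.
    valid = lidar_depth > 0
    supervised = (np.abs(pred_depth[valid] - lidar_depth[valid]).mean()
                  if valid.any() else 0.0)
    # In practice the photometric error comes from warping adjacent frames
    # into the target view; here it is assumed precomputed per pixel.
    self_supervised = photometric_error.mean()
    return weight * supervised + (1.0 - weight) * self_supervised
```

A denser LiDAR (128 channels here, versus 64 in KITTI) increases the fraction of pixels where `valid` is true, which is what makes the supervised term more informative in this setting.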


Cite This Research

Plain Text

Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications," in 3DV '21: Proceedings of the 2021 International Conference on 3D Vision, pp. 1227-1237, IEEE, Dec 2021.

BibTeX

@inproceedings{li21durlar,
 author={Li, Li and Ismail, Khalid N. and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2021 International Conference on 3D Vision},
 series={3DV '21},
 title={DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications},
 year={2021},
 month={12},
 pages={1227--1237},
 numpages={11},
 doi={10.1109/3DV53792.2021.00130},
 publisher={IEEE},
}

RIS

TY  - CONF
AU  - Li, Li
AU  - Ismail, Khalid N.
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2021 International Conference on 3D Vision
TI  - DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
PY  - 2021
Y1  - 2021/12
SP  - 1227
EP  - 1237
DO  - 10.1109/3DV53792.2021.00130
PB  - IEEE
ER  - 


Similar Research

Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network", Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
Li Li, Hubert P. H. Shum and Toby P. Breckon, "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation", Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Li Li, Hubert P. H. Shum and Toby P. Breckon, "RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation", Proceedings of the 2024 European Conference on Computer Vision (ECCV), 2024
Li Li, Tanqiu Qiao, Hubert P. H. Shum and Toby P. Breckon, "TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training", Proceedings of the 2024 British Machine Vision Conference (BMVC), 2024


Last updated on 7 September 2024