Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation

Li Li, Hubert P. H. Shum and Toby P. Breckon
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

Core A* Conference‡ · H5-Index: 422# · Citations: 11#

‡ According to Core Ranking 2023
# According to Google Scholar 2023

Abstract

Whilst the availability of 3D LiDAR point cloud data has significantly grown in recent years, annotation remains expensive and time-consuming, leading to a demand for semi-supervised semantic segmentation methods with application domains such as autonomous driving. Existing work very often employs relatively large segmentation backbone networks to improve segmentation accuracy, at the expense of computational costs. In addition, many use uniform sampling to reduce ground truth data requirements for learning, often resulting in sub-optimal performance. To address these issues, we propose a new pipeline that employs a smaller architecture, requiring fewer ground-truth annotations to achieve superior segmentation accuracy compared to contemporary approaches. This is facilitated via a novel Sparse Depthwise Separable Convolution module that significantly reduces the network parameter count while retaining overall task performance. To effectively sub-sample our training data, we propose a new Spatio-Temporal Redundant Frame Downsampling (ST-RFD) method that leverages knowledge of sensor motion within the environment to extract a more diverse subset of training data frame samples. To leverage the use of limited annotated data samples, we further propose a soft pseudo-label method informed by LiDAR reflectivity. Our method outperforms contemporary semi-supervised work in terms of mIoU, using less labeled data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI (58.1@5%) benchmark datasets, based on a 2.3× reduction in model parameters and 641× fewer multiply-add operations, whilst also demonstrating significant performance improvement on limited training data (i.e., Less is More).
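The two code sketches below are illustrative additions, not the authors' released implementation; every name in them (DepthwiseSeparableConv3d, motion_aware_subsample, poses, min_travel) is hypothetical. The first uses a dense torch.nn analogue to show the channel-wise factorisation behind a depthwise separable convolution, the idea the Sparse Depthwise Separable Convolution module applies to sparse voxel grids, and the parameter reduction it buys:

import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    """Illustrative dense analogue of a sparse depthwise separable conv."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Depthwise: one k x k x k filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv3d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch, bias=False)
        # Pointwise: a 1x1x1 convolution recombines the channels.
        self.pointwise = nn.Conv3d(in_ch, out_ch, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(nn.Conv3d(64, 128, 3, padding=1, bias=False)))  # 128*64*27 = 221,184
print(n_params(DepthwiseSeparableConv3d(64, 128)))             # 64*27 + 128*64 = 9,920 (~22x fewer)

The second is a deliberately simplified, hypothetical take on the idea behind ST-RFD: use sensor motion to drop temporally redundant frames, keeping a scan only once the vehicle has travelled far enough from the last kept one (the actual ST-RFD algorithm is more involved):

import numpy as np

def motion_aware_subsample(poses, min_travel=2.0):
    """poses: (N, 4, 4) sensor-to-world matrices; returns kept frame indices."""
    kept, last_xyz = [], None
    for i, pose in enumerate(poses):
        xyz = pose[:3, 3]                     # sensor position (translation part)
        if last_xyz is None or np.linalg.norm(xyz - last_xyz) >= min_travel:
            kept.append(i)                    # moved far enough: keep this frame
            last_xyz = xyz
    return kept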


Citations

BibTeX

@inproceedings{li23less,
 author={Li, Li and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 series={CVPR '23},
 title={Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation},
 year={2023},
 month=jun,
 pages={9361--9371},
 numpages={11},
 doi={10.1109/CVPR52729.2023.00903},
 publisher={IEEE/CVF},
 location={Vancouver, Canada},
}

RIS

TY  - CONF
AU  - Li, Li
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
TI  - Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation
PY  - 2023
Y1  - 2023/06//
SP  - 9361
EP  - 9371
DO  - 10.1109/CVPR52729.2023.00903
PB  - IEEE/CVF
ER  - 

Plain Text

Li Li, Hubert P. H. Shum and Toby P. Breckon, "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation," in CVPR '23: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9361-9371, Vancouver, Canada, IEEE/CVF, Jun 2023.

Similar Research

Jiaxu Liu, Zhengdi Yu, Toby P. Breckon and Hubert P. H. Shum, "U3DS3: Unsupervised 3D Semantic Scene Segmentation", Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications", Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network", Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
