Core A* Conference‡ · H5-Index: 422
Whilst the availability of 3D LiDAR point cloud data has grown significantly in recent years, annotation remains expensive and time-consuming, creating demand for semi-supervised semantic segmentation methods in application domains such as autonomous driving. Existing work often employs relatively large segmentation backbone networks to improve segmentation accuracy, at the expense of computational cost. In addition, many use uniform sampling to reduce the ground-truth data required for learning, often resulting in sub-optimal performance. To address these issues, we propose a new pipeline that employs a smaller architecture and requires fewer ground-truth annotations, yet achieves superior segmentation accuracy compared to contemporary approaches. This is facilitated via a novel Sparse Depthwise Separable Convolution module that significantly reduces the network parameter count while retaining overall task performance. To effectively sub-sample our training data, we propose a new Spatio-Temporal Redundant Frame Downsampling (ST-RFD) method that leverages knowledge of sensor motion within the environment to extract a more diverse subset of training frames. To make best use of the limited annotated data samples, we further propose a soft pseudo-labeling method informed by LiDAR reflectivity. Our method outperforms contemporary semi-supervised work in terms of mIoU, using less labeled data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI (58.1@5%) benchmark datasets, based on a 2.3x reduction in model parameters and 641x fewer multiply-add operations, whilst also demonstrating significant performance improvement on limited training data (i.e., Less is More).
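The parameter reduction claimed above rests on the classic depthwise separable factorization that the proposed Sparse Depthwise Separable Convolution module builds upon. A minimal sketch of the weight-count arithmetic (bias terms omitted; the sparsity handling of the actual SDSC module is not shown, and the function names here are illustrative only):

```python
def conv3d_params(k, c_in, c_out):
    # Standard dense 3D convolution: one k*k*k kernel per (input, output) channel pair.
    return k ** 3 * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k*k*k kernel per input channel,
    # followed by a 1x1x1 pointwise convolution that mixes channels.
    return k ** 3 * c_in + c_in * c_out

standard = conv3d_params(3, 64, 64)                # 110592 weights
separable = depthwise_separable_params(3, 64, 64)  # 5824 weights
print(standard, separable, round(standard / separable, 1))  # 110592 5824 19.0
```

For a 3x3x3 kernel over 64 channels, the factorized form needs roughly 19x fewer weights than the dense convolution, which is the mechanism behind the smaller backbone described in the abstract.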
Li Li, Hubert P. H. Shum and Toby P. Breckon, "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation," in CVPR '23: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9361-9371, Vancouver, Canada, IEEE/CVF, June 2023.
Last updated on 27 September 2023