Research Publications - Environment Sensing



We enable machines to understand their surrounding environment through advanced machine learning algorithms and a wide range of sensors, including LiDAR, RGB-D cameras and 360 cameras, covering tasks such as scene segmentation, object detection, depth estimation and environment capturing.

Interested in our research? Consider joining us.



RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation Oral Paper (Top 2.3% of 8585 Submissions) H5-Index: 206# Core A* Conference
Proceedings of the 2024 European Conference on Computer Vision (ECCV), 2024
Li Li, Hubert P. H. Shum and Toby P. Breckon
Webpage Cite This Paper Supplementary Material YouTube Part 1 YouTube Part 2
U3DS3: Unsupervised 3D Semantic Scene Segmentation  H5-Index: 109# Core A Conference
Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Jiaxu Liu, Zhengdi Yu, Toby P. Breckon and Hubert P. H. Shum
Webpage Cite This Paper Supplementary Material YouTube
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training  H5-Index: 65#
Proceedings of the 2024 British Machine Vision Conference (BMVC), 2024
Li Li, Tanqiu Qiao, Hubert P. H. Shum and Toby P. Breckon
Webpage Cite This Paper Supplementary Material
Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation  Core A* Conference Citation: 36#
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Li Li, Hubert P. H. Shum and Toby P. Breckon
Webpage Cite This Paper Supplementary Material GitHub YouTube
Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation  Core A* Conference
Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Paper YouTube
360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network  Core A* Conference Citation: 16#
Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Paper YouTube Part 1 YouTube Part 2 Presentation Slides
DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications  H5-Index: 51# Citation: 19#
Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon
Webpage Cite This Paper Dataset GitHub YouTube
Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction
Proceedings of the 2021 Visual Computing (VC), 2021
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Paper
Foreground-Aware Dense Depth Estimation for 360 Images
Journal of WSCG - Proceedings of the 2020 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2020
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Paper Supplementary Material YouTube Presentation Slides
Depth Sensor Based Facial and Body Animation Control
Book Chapter: Handbook of Human Motion, 2016
Yijun Shen, Jingtian Zhang, Longzhi Yang and Hubert P. H. Shum
Webpage Cite This Paper
Serious Games with Human-Object Interactions using RGB-D Camera
Proceedings of the 2013 ACM International Conference on Motion in Games (MIG) Posters, 2013
Hubert P. H. Shum
Webpage Cite This Paper
Environment Capturing with Microsoft Kinect
Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2012
Kevin Mackay, Hubert P. H. Shum and Taku Komura
Webpage Cite This Paper


† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2024


