Research Publications - Environment Sensing

We enable machines to understand their surrounding environment through advanced machine learning algorithms and a wide range of sensors, including LiDAR, RGB-D cameras and 360 cameras. Our work covers tasks such as scene segmentation, object detection, depth estimation and environment capturing.
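
As a rough, hypothetical illustration of the raw data behind several of the LiDAR works listed below (this is not code from any of the papers here; the KITTI-style binary layout and file name are assumptions), the short Python sketch below loads a point-cloud scan and computes per-point ranges, the kind of quantity that range-aware LiDAR segmentation builds on.

import numpy as np

def load_lidar_scan(path):
    # Assumed KITTI-style layout: consecutive float32 records of (x, y, z, intensity).
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def point_ranges(points):
    # Euclidean distance of each point from the sensor origin.
    return np.linalg.norm(points[:, :3], axis=1)

scan = load_lidar_scan("scan_000000.bin")  # hypothetical file name
print(scan.shape, point_ranges(scan).min(), point_ranges(scan).max())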

Interested in our research? Consider joining us.

RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation
Oral Paper (Top 2.3% of 8585 Submissions) · H5-Index: 206# · Core A* Conference
Proceedings of the 2024 European Conference on Computer Vision (ECCV), 2024
Li Li, Hubert P. H. Shum and Toby P. Breckon
Webpage · Plain Text
Li Li, Hubert P. H. Shum and Toby P. Breckon, "RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation," in ECCV '24: Proceedings of the 2024 European Conference on Computer Vision, vol. 15065, pp. 222-241, Milan, Italy, Springer, 2024.
Bibtex
@inproceedings{li24rapidseg,
 author={Li, Li and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2024 European Conference on Computer Vision},
 series={ECCV '24},
 title={RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation},
 year={2024},
 volume={15065},
 pages={222--241},
 numpages={20},
 doi={10.1007/978-3-031-72667-5_13},
 publisher={Springer},
 location={Milan, Italy},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2024 European Conference on Computer Vision
TI  - RAPiD-Seg: Range-Aware Pointwise Distance Distribution Networks for 3D LiDAR Segmentation
PY  - 2024
VL  - 15065
SP  - 222
EP  - 241
DO  - 10.1007/978-3-031-72667-5_13
PB  - Springer
ER  - 
Paper · Supplementary Material · YouTube Part 1 · YouTube Part 2
U3DS3: Unsupervised 3D Semantic Scene Segmentation
H5-Index: 109# · Core A Conference · Citation: 14#
Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Jiaxu Liu, Zhengdi Yu, Toby P. Breckon and Hubert P. H. Shum
Webpage · Plain Text
Jiaxu Liu, Zhengdi Yu, Toby P. Breckon and Hubert P. H. Shum, "U3DS3: Unsupervised 3D Semantic Scene Segmentation," in WACV '24: Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 3759-3768, Hawaii, USA, IEEE/CVF, Jan 2024.
Bibtex
@inproceedings{liu24u3ds3,
 author={Liu, Jiaxu and Yu, Zhengdi and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision},
 series={WACV '24},
 title={U3DS3: Unsupervised 3D Semantic Scene Segmentation},
 year={2024},
 month={1},
 pages={3759--3768},
 numpages={10},
 doi={10.1109/WACV57701.2024.00372},
 publisher={IEEE/CVF},
 location={Hawaii, USA},
}
RIS
TY  - CONF
AU  - Liu, Jiaxu
AU  - Yu, Zhengdi
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision
TI  - U3DS3: Unsupervised 3D Semantic Scene Segmentation
PY  - 2024
Y1  - 1 2024
SP  - 3759
EP  - 3768
DO  - 10.1109/WACV57701.2024.00372
PB  - IEEE/CVF
ER  - 
Paper · Supplementary Material · YouTube
TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training
H5-Index: 65#
Proceedings of the 2024 British Machine Vision Conference (BMVC), 2024
Li Li, Tanqiu Qiao, Hubert P. H. Shum and Toby P. Breckon
Webpage · Plain Text
Li Li, Tanqiu Qiao, Hubert P. H. Shum and Toby P. Breckon, "TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training," in BMVC '24: Proceedings of the 2024 British Machine Vision Conference, Glasgow, UK, 2024.
Bibtex
@inproceedings{li24traildet,
 author={Li, Li and Qiao, Tanqiu and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2024 British Machine Vision Conference},
 series={BMVC '24},
 title={TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training},
 year={2024},
 location={Glasgow, UK},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Qiao, Tanqiu
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2024 British Machine Vision Conference
TI  - TraIL-Det: Transformation-Invariant Local Feature Networks for 3D LiDAR Object Detection with Unsupervised Pre-Training
PY  - 2024
ER  - 
Paper · Supplementary Material
Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation
Core A* Conference · Citation: 46#
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
Li Li, Hubert P. H. Shum and Toby P. Breckon
Webpage · Plain Text
Li Li, Hubert P. H. Shum and Toby P. Breckon, "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation," in CVPR '23: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9361-9371, Vancouver, Canada, IEEE/CVF, Jun 2023.
Bibtex
@inproceedings{li23less,
 author={Li, Li and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
 series={CVPR '23},
 title={Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation},
 year={2023},
 month={6},
 pages={9361--9371},
 numpages={11},
 doi={10.1109/CVPR52729.2023.00903},
 publisher={IEEE/CVF},
 location={Vancouver, Canada},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition
TI  - Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation
PY  - 2023
Y1  - 6 2023
SP  - 9361
EP  - 9371
DO  - 10.1109/CVPR52729.2023.00903
PB  - IEEE/CVF
ER  - 
Paper · Supplementary Material · GitHub · YouTube
Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation
Core A* Conference
Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage · Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation," in ISMAR '23: Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality, pp. 405-414, Sydney, Australia, IEEE, Oct 2023.
Bibtex
@inproceedings{feng23enhancing,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality},
 series={ISMAR '23},
 title={Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation},
 year={2023},
 month={10},
 pages={405--414},
 numpages={10},
 doi={10.1109/ISMAR59233.2023.00055},
 publisher={IEEE},
 location={Sydney, Australia},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality
TI  - Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation
PY  - 2023
Y1  - 10 2023
SP  - 405
EP  - 414
DO  - 10.1109/ISMAR59233.2023.00055
PB  - IEEE
ER  - 
Paper · YouTube
Region-Based Appearance and Flow Characteristics for Anomaly Detection in Infrared Surveillance Imagery
H5-Index: 115#
Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023
Yona Falinie A. Gaus, Neelanjan Bhowmik, Brian K. S. Isaac-Medina, Hubert P. H. Shum, Amir Atapour-Abarghouei and Toby P. Breckon
Webpage · Plain Text
Yona Falinie A. Gaus, Neelanjan Bhowmik, Brian K. S. Isaac-Medina, Hubert P. H. Shum, Amir Atapour-Abarghouei and Toby P. Breckon, "Region-Based Appearance and Flow Characteristics for Anomaly Detection in Infrared Surveillance Imagery," in CVPRW '23: Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 2995-3005, Vancouver, Canada, IEEE/CVF, Jun 2023.
Bibtex
@inproceedings{gaus23region,
 author={Gaus, Yona Falinie A. and Bhowmik, Neelanjan and Isaac-Medina, Brian K. S. and Shum, Hubert P. H. and Atapour-Abarghouei, Amir and Breckon, Toby P.},
 booktitle={Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
 series={CVPRW '23},
 title={Region-Based Appearance and Flow Characteristics for Anomaly Detection in Infrared Surveillance Imagery},
 year={2023},
 month={6},
 pages={2995--3005},
 numpages={11},
 doi={10.1109/CVPRW59228.2023.00301},
 publisher={IEEE/CVF},
 location={Vancouver, Canada},
}
RIS
TY  - CONF
AU  - Gaus, Yona Falinie A.
AU  - Bhowmik, Neelanjan
AU  - Isaac-Medina, Brian K. S.
AU  - Shum, Hubert P. H.
AU  - Atapour-Abarghouei, Amir
AU  - Breckon, Toby P.
T2  - Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
TI  - Region-Based Appearance and Flow Characteristics for Anomaly Detection in Infrared Surveillance Imagery
PY  - 2023
Y1  - 6 2023
SP  - 2995
EP  - 3005
DO  - 10.1109/CVPRW59228.2023.00301
PB  - IEEE/CVF
ER  - 
Paper
360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
Core A* Conference · Citation: 25#
Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage · Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network," in VR '22: Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 664-673, IEEE, Mar 2022.
Bibtex
@inproceedings{feng22depth,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces},
 series={VR '22},
 title={360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network},
 year={2022},
 month={3},
 pages={664--673},
 numpages={10},
 doi={10.1109/VR51125.2022.00087},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces
TI  - 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
PY  - 2022
Y1  - 3 2022
SP  - 664
EP  - 673
DO  - 10.1109/VR51125.2022.00087
PB  - IEEE
ER  - 
Paper · YouTube Part 1 · YouTube Part 2
DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
H5-Index: 51# · Citation: 20#
Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon
Webpage · Plain Text
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications," in 3DV '21: Proceedings of the 2021 International Conference on 3D Vision, pp. 1227-1237, IEEE, Dec 2021.
Bibtex
@inproceedings{li21durlar,
 author={Li, Li and Ismail, Khalid N. and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2021 International Conference on 3D Vision},
 series={3DV '21},
 title={DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications},
 year={2021},
 month={12},
 pages={1227--1237},
 numpages={11},
 doi={10.1109/3DV53792.2021.00130},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Ismail, Khalid N.
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2021 International Conference on 3D Vision
TI  - DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
PY  - 2021
Y1  - 12 2021
SP  - 1227
EP  - 1237
DO  - 10.1109/3DV53792.2021.00130
PB  - IEEE
ER  - 
Paper · Dataset · GitHub · YouTube
Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction
Proceedings of the 2021 Visual Computing (VC), 2021
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage · Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction," in VC '21: Proceedings of the 2021 Visual Computing, Sep 2021.
Bibtex
@inproceedings{feng21biprojection,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2021 Visual Computing},
 series={VC '21},
 title={Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction},
 year={2021},
 month={9},
 numpages={6},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2021 Visual Computing
TI  - Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction
PY  - 2021
Y1  - 9 2021
ER  - 
Paper
Foreground-Aware Dense Depth Estimation for 360 Images
Journal of WSCG - Proceedings of the 2020 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2020
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage · Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Foreground-Aware Dense Depth Estimation for 360 Images," Journal of WSCG, vol. 28, no. 1-2, pp. 79-88, Plzen, Czech Republic, May 2020.
Bibtex
@article{feng20foreground,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 journal={Journal of WSCG},
 title={Foreground-Aware Dense Depth Estimation for 360 Images},
 year={2020},
 month={5},
 volume={28},
 number={1--2},
 pages={79--88},
 numpages={10},
 doi={10.24132/JWSCG.2020.28.10},
 issn={1213-6972},
 location={Plzen, Czech Republic},
}
RIS
TY  - JOUR
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Journal of WSCG
TI  - Foreground-Aware Dense Depth Estimation for 360 Images
PY  - 2020
Y1  - 5 2020
VL  - 28
IS  - 1-2
SP  - 79
EP  - 88
DO  - 10.24132/JWSCG.2020.28.10
SN  - 1213-6972
ER  - 
Paper · Supplementary Material · YouTube
Depth Sensor Based Facial and Body Animation Control
Book Chapter: Handbook of Human Motion, 2016
Yijun Shen, Jingtian Zhang, Longzhi Yang and Hubert P. H. Shum
Webpage · Plain Text
Yijun Shen, Jingtian Zhang, Longzhi Yang and Hubert P. H. Shum, "Depth Sensor Based Facial and Body Animation Control," in Handbook of Human Motion, Springer International Publishing, 2016.
Bibtex
@incollection{shen16depth,
 author={Shen, Yijun and Zhang, Jingtian and Yang, Longzhi and Shum, Hubert P. H.},
 booktitle={Handbook of Human Motion},
 title={Depth Sensor Based Facial and Body Animation Control},
 year={2016},
 numpages={16},
 doi={10.1007/978-3-319-30808-1_7-1},
 isbn={978-3-319-30808-1},
 publisher={Springer International Publishing},
 address={Cham},
}
RIS
TY  - CHAP
AU  - Shen, Yijun
AU  - Zhang, Jingtian
AU  - Yang, Longzhi
AU  - Shum, Hubert P. H.
T2  - Handbook of Human Motion
TI  - Depth Sensor Based Facial and Body Animation Control
PY  - 2016
DO  - 10.1007/978-3-319-30808-1_7-1
SN  - 978-3-319-30808-1
PB  - Springer International Publishing
ER  - 
Paper
Serious Games with Human-Object Interactions using RGB-D Camera
Proceedings of the 2013 ACM International Conference on Motion in Games (MIG) Posters, 2013
Hubert P. H. Shum
Webpage · Plain Text
Hubert P. H. Shum, "Serious Games with Human-Object Interactions using RGB-D Camera," in MIG '13: Proceedings of the 2013 ACM International Conference on Motion in Games, Dublin, Ireland, Springer-Verlag, Nov 2013.
Bibtex
@inproceedings{shum13serious,
 author={Shum, Hubert P. H.},
 booktitle={Proceedings of the 2013 ACM International Conference on Motion in Games},
 series={MIG '13},
 title={Serious Games with Human-Object Interactions using RGB-D Camera},
 year={2013},
 month={11},
 numpages={1},
 publisher={Springer-Verlag},
 address={Berlin, Heidelberg},
 location={Dublin, Ireland},
}
RIS
TY  - CONF
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2013 ACM International Conference on Motion in Games
TI  - Serious Games with Human-Object Interactions using RGB-D Camera
PY  - 2013
Y1  - 11 2013
PB  - Springer-Verlag
ER  - 
Paper
Environment Capturing with Microsoft Kinect
Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2012
Kevin Mackay, Hubert P. H. Shum and Taku Komura
Webpage · Plain Text
Kevin Mackay, Hubert P. H. Shum and Taku Komura, "Environment Capturing with Microsoft Kinect," in SKIMA '12: Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications, Dhaka, Bangladesh, Dec 2012.
Bibtex
@inproceedings{mackay12environment,
 author={Mackay, Kevin and Shum, Hubert P. H. and Komura, Taku},
 booktitle={Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications},
 series={SKIMA '12},
 title={Environment Capturing with Microsoft Kinect},
 year={2012},
 month={12},
 numpages={6},
 location={Dhaka, Bangladesh},
}
RIS
TY  - CONF
AU  - Mackay, Kevin
AU  - Shum, Hubert P. H.
AU  - Komura, Taku
T2  - Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications
TI  - Environment Capturing with Microsoft Kinect
PY  - 2012
Y1  - 12 2012
ER  - 
Paper

† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2025

