Research Publications - Air and Space


As part of the Durham University Space Research Centre, we study the computer science aspects of space technologies, as well as complementary air capabilities such as the control and detection of unmanned aerial vehicles (UAVs). This research is supported by my Ministry of Defence (DASA) project and my UK Catapult Network project.

Interested in our research? Consider joining us.


Impact Factor 7.0+

Formation Control for UAVs Using a Flux Guided Approach
Impact Factor: 7.5† · Top 25% Journal in Computer Science, Artificial Intelligence
Expert Systems with Applications (ESWA), 2022
John Hartley, Hubert P. H. Shum, Edmond S. L. Ho, He Wang and Subramanian Ramamoorthy
Webpage Cite This Plain Text
John Hartley, Hubert P. H. Shum, Edmond S. L. Ho, He Wang and Subramanian Ramamoorthy, "Formation Control for UAVs Using a Flux Guided Approach," Expert Systems with Applications, vol. 205, pp. 117665, Elsevier, 2022.
Bibtex
@article{hartley21formation,
 author={Hartley, John and Shum, Hubert P. H. and Ho, Edmond S. L. and Wang, He and Ramamoorthy, Subramanian},
 journal={Expert Systems with Applications},
 series={ESWA '22},
 title={Formation Control for UAVs Using a Flux Guided Approach},
 year={2022},
 volume={205},
 pages={117665},
 numpages={11},
 doi={10.1016/j.eswa.2022.117665},
 issn={0957-4174},
 publisher={Elsevier},
}
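To cite this paper in LaTeX, the BibTeX entry above can be used directly; a minimal sketch, assuming the entry is saved in a file named references.bib:

% Minimal LaTeX usage sketch (assumption: the BibTeX entry above is stored in references.bib)
\documentclass{article}
\begin{document}
Formation control with flux guidance \cite{hartley21formation}.
\bibliographystyle{plain}   % any standard style works; plain is used here for illustration
\bibliography{references}
\end{document}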
RIS
TY  - JOUR
AU  - Hartley, John
AU  - Shum, Hubert P. H.
AU  - Ho, Edmond S. L.
AU  - Wang, He
AU  - Ramamoorthy, Subramanian
T2  - Expert Systems with Applications
TI  - Formation Control for UAVs Using a Flux Guided Approach
PY  - 2022
VL  - 205
SP  - 117665
EP  - 117665
DO  - 10.1016/j.eswa.2022.117665
SN  - 0957-4174
PB  - Elsevier
ER  - 
Paper YouTube

Impact Factor 0.0+

A Mixed Reality Training System for Hand-Object Interaction in Simulated Microgravity Environments
Core A* Conference‡
Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023
Kanglei Zhou, Chen Chen, Yue Ma, Zhiying Leng, Hubert P. H. Shum, Frederick W. B. Li and Xiaohui Liang
Webpage Cite This Plain Text
Kanglei Zhou, Chen Chen, Yue Ma, Zhiying Leng, Hubert P. H. Shum, Frederick W. B. Li and Xiaohui Liang, "A Mixed Reality Training System for Hand-Object Interaction in Simulated Microgravity Environments," in ISMAR '23: Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality, pp. 167-176, Sydney, Australia, IEEE, Oct 2023.
Bibtex
@inproceedings{zhou23mixed,
 author={Zhou, Kanglei and Chen, Chen and Ma, Yue and Leng, Zhiying and Shum, Hubert P. H. and Li, Frederick W. B. and Liang, Xiaohui},
 booktitle={Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality},
 series={ISMAR '23},
 title={A Mixed Reality Training System for Hand-Object Interaction in Simulated Microgravity Environments},
 year={2023},
 month={10},
 pages={167--176},
 numpages={10},
 doi={10.1109/ISMAR59233.2023.00031},
 publisher={IEEE},
 location={Sydney, Australia},
}
RIS
TY  - CONF
AU  - Zhou, Kanglei
AU  - Chen, Chen
AU  - Ma, Yue
AU  - Leng, Zhiying
AU  - Shum, Hubert P. H.
AU  - Li, Frederick W. B.
AU  - Liang, Xiaohui
T2  - Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality
TI  - A Mixed Reality Training System for Hand-Object Interaction in Simulated Microgravity Environments
PY  - 2023
Y1  - 10 2023
SP  - 167
EP  - 176
DO  - 10.1109/ISMAR59233.2023.00031
PB  - IEEE
ER  - 
Paper YouTube
UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
Webpage Cite This Plain Text
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum, "UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery," in VISAPP '22: Proceedings of the 2022 International Conference on Computer Vision Theory and Applications, pp. 136-146, SciTePress, Feb 2022.
Bibtex
@inproceedings{organisciak22uavreid,
 author={Organisciak, Daniel and Poyser, Matthew and Alsehaim, Aishah and Hu, Shanfeng and Isaac-Medina, Brian K. S. and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2022 International Conference on Computer Vision Theory and Applications},
 series={VISAPP '22},
 title={UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery},
 year={2022},
 month={2},
 pages={136--146},
 numpages={11},
 doi={10.5220/0010836600003124},
 isbn={978-989-758-555-5},
 publisher={SciTePress},
}
RIS
TY  - CONF
AU  - Organisciak, Daniel
AU  - Poyser, Matthew
AU  - Alsehaim, Aishah
AU  - Hu, Shanfeng
AU  - Isaac-Medina, Brian K. S.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2022 International Conference on Computer Vision Theory and Applications
TI  - UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
PY  - 2022
Y1  - 2 2022
SP  - 136
EP  - 146
DO  - 10.5220/0010836600003124
SN  - 978-989-758-555-5
PB  - SciTePress
ER  - 
Paper GitHub
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
H5-Index: 80# · Citations: 90#
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Webpage Cite This Plain Text
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum, "Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark," in ICCVW '21: Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops, pp. 1223-1232, IEEE/CVF, Oct 2021.
Bibtex
@inproceedings{issacmedina21unmanned,
 author={Isaac-Medina, Brian K. S. and Poyser, Matthew and Organisciak, Daniel and Willcocks, Chris G. and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops},
 series={ICCVW '21},
 title={Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark},
 year={2021},
 month={10},
 pages={1223--1232},
 numpages={10},
 doi={10.1109/ICCVW54120.2021.00142},
 publisher={IEEE/CVF},
}
RIS
TY  - CONF
AU  - Isaac-Medina, Brian K. S.
AU  - Poyser, Matthew
AU  - Organisciak, Daniel
AU  - Willcocks, Chris G.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops
TI  - Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
PY  - 2021
Y1  - 10 2021
SP  - 1223
EP  - 1232
DO  - 10.1109/ICCVW54120.2021.00142
PB  - IEEE/CVF
ER  - 
Paper GitHub


† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2025

