
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark

Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021

H5-Index: 66; Citations: 49 (according to Google Scholar, 2023)

Abstract

Unmanned Aerial Vehicles (UAV) can pose a major risk for aviation safety, due to both negligent and malicious use. For this reason, the automated detection and tracking of UAV is a fundamental task in aerial security systems. Common technologies for UAV detection include visible-band and thermal infrared imaging, radio frequency and radar. Recent advances in deep neural networks (DNNs) for image-based object detection open the possibility to use visual information for this detection and tracking task. Furthermore, these detection architectures can be implemented as backbones for visual tracking systems, thereby enabling persistent tracking of UAV incursions. To date, no comprehensive performance benchmark exists that applies DNNs to visible-band imagery for UAV detection and tracking. To this end, three datasets with varied environmental conditions for UAV detection and tracking, comprising a total of 241 videos (331,486 images), are assessed using four detection architectures and three tracking frameworks. The best performing detector architecture obtains an mAP of 98.6% and the best performing tracking framework obtains a MOTA of 98.7%. Cross-modality evaluation is carried out between visible and infrared spectrums, achieving a maximal 82.8% mAP on visible images when training in the infrared modality. These results provide the first public multi-approach benchmark for state-of-the-art deep learning-based methods and give insight into which detection and tracking architectures are effective in the UAV domain.
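The abstract reports results in terms of mAP (detection) and MOTA (tracking). As a hedged illustration only, not the paper's evaluation code, the sketch below shows how the two underlying quantities are conventionally computed: Intersection-over-Union for matching predicted boxes to ground truth, and MOTA as one minus the normalised sum of misses, false positives, and identity switches. All boxes and tallies here are hypothetical.

```python
# Illustrative sketch (not the authors' benchmark code): the two quantities
# behind the abstract's headline metrics. All numbers below are hypothetical.

def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDSW) / GT."""
    return 1.0 - (false_negatives + false_positives + id_switches) / num_ground_truth

# Two boxes overlapping over half their width: IoU = 50 / 150
print(round(iou((0, 0, 10, 10), (5, 0, 15, 10)), 3))  # 0.333

# Hypothetical tallies over a sequence with 1000 ground-truth boxes:
print(round(mota(false_negatives=8, false_positives=4,
                 id_switches=1, num_ground_truth=1000), 4))  # 0.987
```

In practice a detection counts as a true positive only when its IoU with an unmatched ground-truth box exceeds a threshold (commonly 0.5), and mAP averages precision over recall levels; the MOTA function above is the standard CLEAR-MOT definition.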

Citations

BibTeX

@inproceedings{issacmedina21unmanned,
 author={Isaac-Medina, Brian K. S. and Poyser, Matthew and Organisciak, Daniel and Willcocks, Chris G. and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops},
 series={ICCVW '21},
 title={Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark},
 year={2021},
 month={10},
 pages={1223--1232},
 numpages={10},
 doi={10.1109/ICCVW54120.2021.00142},
 publisher={IEEE/CVF},
}

RIS

TY  - CONF
AU  - Isaac-Medina, Brian K. S.
AU  - Poyser, Matthew
AU  - Organisciak, Daniel
AU  - Willcocks, Chris G.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops
TI  - Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
PY  - 2021
Y1  - 10 2021
SP  - 1223
EP  - 1232
DO  - 10.1109/ICCVW54120.2021.00142
PB  - IEEE/CVF
ER  - 

Plain Text

Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum, "Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark," in ICCVW '21: Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops, pp. 1223-1232, IEEE/CVF, Oct 2021.

Supporting Grants

The Catapult Network (S-TRIG)
Tracking Drones Across Different Platforms with Machine Vision
Security Technology Research Innovation Grants Programme (S-TRIG) (Ref: 007CD): £32,727, Principal Investigator
Received from The Catapult Network (S-TRIG), UK, 2020-2021
Project Page
Northumbria University

Postgraduate Research Scholarship: £65,000, Principal Investigator
Received from Faculty of Engineering and Environment, Northumbria University, UK, 2018-2021
Project Page

Similar Research

Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum, "UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery", Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum, "Unifying Person and Vehicle Re-Identification", IEEE Access, 2020
Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum, "Triplet Loss with Channel Attention for Person Re-Identification", Journal of WSCG - Proceedings of the 2019 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2019

Last updated on 17 February 2024