Dr Daniel Organisciak

Northumbria University
PhD (Co-supervised with ), 2018 - 2022

Northumbria University, United Kingdom
  • Research topic: Neural attention mechanisms for robust and interpretable feature representation learning
  • Funded by Northumbria University.

Grants Involved

The Catapult Network (S-TRIG)
Tracking Drones Across Different Platforms with Machine Vision
Security Technology Research Innovation Grants Programme (S-TRIG) (Ref: 007CD): £32,727, Contributing Researcher (PI: Hubert P. H. Shum)
Received from The Catapult Network (S-TRIG), UK, 2020-2021
Project Page
Northumbria University

Postgraduate Research Scholarship (Ref: ): £65,000, PhD (PI: Hubert P. H. Shum)
Received from Faculty of Engineering and Environment, Northumbria University, UK, 2018-2021
Project Page

Publications with the Team

RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis (Impact Factor: 7.5; Top 25% Journal in Computer Science, Artificial Intelligence; Citations: 13)
Expert Systems with Applications (ESWA), 2022
Daniel Organisciak, Hubert P. H. Shum, Ephraim Nwoye and Wai Lok Woo
Webpage Cite This Paper GitHub
UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
Webpage Cite This Paper GitHub
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark (H5-Index: 80; Citations: 75)
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Webpage Cite This Paper GitHub
Unifying Person and Vehicle Re-Identification (Impact Factor: 3.4; Citations: 12)
IEEE Access, 2020
Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum
Webpage Cite This Paper GitHub
Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention (H5-Index: 56; Citations: 13)
Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020
Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
Webpage Cite This Paper Supplementary Material GitHub YouTube Presentation Slides
Triplet Loss with Channel Attention for Person Re-Identification (Citations: 12)
Journal of WSCG - Proceedings of the 2019 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2019
Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum
Webpage Cite This Paper
Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition
Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang
Webpage Cite This Paper

Links

Webpage
Google Scholar
ResearchGate


Last updated on 7 September 2024
RSS Feed