Dr Daniel Organisciak
PhD, 2018 - 2022
Northumbria University, United Kingdom
  • Research topic: Neural attention mechanisms for robust and interpretable feature representation learning
  • Funded by Northumbria University.

Grants Involved

The Catapult Network (S-TRIG)
Tracking Drones Across Different Platforms with Machine Vision
Security Technology Research Innovation Grants Programme (S-TRIG) (Ref: 007CD): £32,727, Contributing Researcher (PI: Hubert P. H. Shum)
Received from The Catapult Network (S-TRIG), UK, 2020-2021
Northumbria University

Postgraduate Research Scholarship: £65,000, PhD (PI: Hubert P. H. Shum)
Received from Faculty of Engineering and Environment, Northumbria University, UK, 2018-2021

Publications with the Team

RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis (Impact Factor: 8.5; Top 25% Journal in Computer Science, Artificial Intelligence; Citations: 10)
Expert Systems with Applications (ESWA), 2022
Daniel Organisciak, Hubert P. H. Shum, Ephraim Nwoye and Wai Lok Woo
UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark (H5-Index: 66; Citations: 58)
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Unifying Person and Vehicle Re-Identification (Impact Factor: 3.9)
IEEE Access, 2020
Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum
Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention (H5-Index: 58; Citations: 11)
Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020
Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
Triplet Loss with Channel Attention for Person Re-Identification (Citations: 11)
Journal of WSCG - Proceedings of the 2019 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2019
Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum
Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition
Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang

Links

Webpage
Google Scholar
ResearchGate
ORCID
DBLP

Last updated on 14 April 2024