Self-funded PhD Positions Available

  • Biomedical Engineering with Deep Learning based Video Analysis
  • Computer Vision with Deep Learning for Human Data Modelling
  • Deep Learning based Computer Graphics for Creating Virtual Characters

Dr Daniel Organisciak
Northumbria University

PhD, 2018 - 2022
Northumbria University, United Kingdom
  • Research topic: Neural attention mechanisms for robust and interpretable feature representation learning
  • Funded by Northumbria University

Funding Participation

The Catapult Network (S-TRIG)
Security Technology Research Innovation Grants Programme (S-TRIG) (Ref: 007CD): £29,500, Contributing Researcher (PI: Hubert P. H. Shum)
Project Title: Tracking Drones Across Different Platforms with Machine Vision

Received from The Catapult Network (S-TRIG), UK, 2020-2021
Project Page
Northumbria University
Postgraduate Research Scholarship: £65,000, PhD (PI: Hubert P. H. Shum)

Received from Faculty of Engineering and Environment, Northumbria University, UK, 2018-2021
Project Page

Publications with the Team

RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis
Impact Factor: 6.954
Expert Systems with Applications (ESWA), 2022
Daniel Organisciak, Hubert P. H. Shum, Ephraim Nwoye and Wai Lok Woo
Webpage Paper
UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-identification in Video Imagery
Citations: 1
Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
Webpage Paper Github
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
Citations: 10
Proceedings of the 2021 International Conference on Computer Vision Workshop on Anti-UAV (ICCVW), 2021
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Webpage Paper Github
Unifying Person and Vehicle Re-identification
Impact Factor: 3.367, Citations: 5
IEEE Access, 2020
Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum
Webpage Paper
Makeup Style Transfer on Low-quality Images with Weighted Multi-scale Attention
Citations: 3
Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020
Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
Webpage Paper Github YouTube Presentation Slides Supplementary Materials
Triplet Loss with Channel Attention for Person Re-identification
Citations: 6
Journal of WSCG - Proceedings of the 2019 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2019
Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum
Webpage Paper
Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition
Citations: 4
Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang
Webpage Paper

Links

Webpage
Google Scholar
ResearchGate
ORCID
DBLP

Last updated on 01 July 2022