Postgraduate Research Scholarship

Funding Source: Faculty of Engineering and Environment, Northumbria University, UK
Value: £65,000
Duration: 2018 - 2021

    Publications

    RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis
    Impact Factor: 7.5 (Top 25% Journal in Computer Science, Artificial Intelligence); Citations: 26
    Expert Systems with Applications (ESWA), 2022
    Daniel Organisciak, Hubert P. H. Shum, Ephraim Nwoye and Wai Lok Woo
    Plain Text
    Daniel Organisciak, Hubert P. H. Shum, Ephraim Nwoye and Wai Lok Woo, "RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis," Expert Systems with Applications, vol. 201, pp. 117158, Elsevier, 2022.
    Bibtex
    @article{organisciak22robin,
     author={Organisciak, Daniel and Shum, Hubert P. H. and Nwoye, Ephraim and Woo, Wai Lok},
     journal={Expert Systems with Applications},
     title={RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis},
     year={2022},
     volume={201},
     pages={117158},
     numpages={12},
     doi={10.1016/j.eswa.2022.117158},
     issn={0957-4174},
     publisher={Elsevier},
    }
    RIS
    TY  - JOUR
    AU  - Organisciak, Daniel
    AU  - Shum, Hubert P. H.
    AU  - Nwoye, Ephraim
    AU  - Woo, Wai Lok
    T2  - Expert Systems with Applications
    TI  - RobIn: A Robust Interpretable Deep Network for Schizophrenia Diagnosis
    PY  - 2022
    VL  - 201
    SP  - 117158
    EP  - 117158
    DO  - 10.1016/j.eswa.2022.117158
    SN  - 0957-4174
    PB  - Elsevier
    ER  - 
    UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
    Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
    Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
    Plain Text
    Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum, "UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery," in VISAPP '22: Proceedings of the 2022 International Conference on Computer Vision Theory and Applications, pp. 136-146, SciTePress, Feb 2022.
    Bibtex
    @inproceedings{organisciak22uavreid,
     author={Organisciak, Daniel and Poyser, Matthew and Alsehaim, Aishah and Hu, Shanfeng and Isaac-Medina, Brian K. S. and Breckon, Toby P. and Shum, Hubert P. H.},
     booktitle={Proceedings of the 2022 International Conference on Computer Vision Theory and Applications},
     series={VISAPP '22},
     title={UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery},
     year={2022},
     month={2},
     pages={136--146},
     numpages={11},
     doi={10.5220/0010836600003124},
     isbn={978-989-758-555-5},
     publisher={SciTePress},
    }
    RIS
    TY  - CONF
    AU  - Organisciak, Daniel
    AU  - Poyser, Matthew
    AU  - Alsehaim, Aishah
    AU  - Hu, Shanfeng
    AU  - Isaac-Medina, Brian K. S.
    AU  - Breckon, Toby P.
    AU  - Shum, Hubert P. H.
    T2  - Proceedings of the 2022 International Conference on Computer Vision Theory and Applications
    TI  - UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
    PY  - 2022
    Y1  - 2022/02//
    SP  - 136
    EP  - 146
    DO  - 10.5220/0010836600003124
    SN  - 978-989-758-555-5
    PB  - SciTePress
    ER  - 
    Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
    H5-Index: 80; Citations: 94
    Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
    Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
    Plain Text
    Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum, "Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark," in ICCVW '21: Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops, pp. 1223-1232, IEEE/CVF, Oct 2021.
    Bibtex
    @inproceedings{isaacmedina21unmanned,
     author={Isaac-Medina, Brian K. S. and Poyser, Matthew and Organisciak, Daniel and Willcocks, Chris G. and Breckon, Toby P. and Shum, Hubert P. H.},
     booktitle={Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops},
     series={ICCVW '21},
     title={Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark},
     year={2021},
     month={10},
     pages={1223--1232},
     numpages={10},
     doi={10.1109/ICCVW54120.2021.00142},
     publisher={IEEE/CVF},
    }
    RIS
    TY  - CONF
    AU  - Isaac-Medina, Brian K. S.
    AU  - Poyser, Matthew
    AU  - Organisciak, Daniel
    AU  - Willcocks, Chris G.
    AU  - Breckon, Toby P.
    AU  - Shum, Hubert P. H.
    T2  - Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops
    TI  - Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
    PY  - 2021
    Y1  - 2021/10//
    SP  - 1223
    EP  - 1232
    DO  - 10.1109/ICCVW54120.2021.00142
    PB  - IEEE/CVF
    ER  - 
    Unifying Person and Vehicle Re-Identification
    Impact Factor: 3.4; Citations: 14
    IEEE Access, 2020
    Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum
    Plain Text
    Daniel Organisciak, Dimitrios Sakkos, Edmond S. L. Ho, Nauman Aslam and Hubert P. H. Shum, "Unifying Person and Vehicle Re-Identification," IEEE Access, vol. 8, pp. 115673-115684, IEEE, 2020.
    Bibtex
    @article{daniel20unifying,
     author={Organisciak, Daniel and Sakkos, Dimitrios and Ho, Edmond S. L. and Aslam, Nauman and Shum, Hubert P. H.},
     journal={IEEE Access},
     title={Unifying Person and Vehicle Re-Identification},
     year={2020},
     volume={8},
     pages={115673--115684},
     numpages={12},
     doi={10.1109/ACCESS.2020.3004092},
     issn={2169-3536},
     publisher={IEEE},
    }
    RIS
    TY  - JOUR
    AU  - Organisciak, Daniel
    AU  - Sakkos, Dimitrios
    AU  - Ho, Edmond S. L.
    AU  - Aslam, Nauman
    AU  - Shum, Hubert P. H.
    T2  - IEEE Access
    TI  - Unifying Person and Vehicle Re-Identification
    PY  - 2020
    VL  - 8
    SP  - 115673
    EP  - 115684
    DO  - 10.1109/ACCESS.2020.3004092
    SN  - 2169-3536
    PB  - IEEE
    ER  - 
    Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention
    H5-Index: 56; Citations: 13
    Proceedings of the 2020 International Conference on Pattern Recognition (ICPR), 2020
    Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum
    Plain Text
    Daniel Organisciak, Edmond S. L. Ho and Hubert P. H. Shum, "Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention," in ICPR '20: Proceedings of the 2020 International Conference on Pattern Recognition, pp. 6011-6018, Milan, Italy, Jan 2020.
    Bibtex
    @inproceedings{organisciak20makeup,
     author={Organisciak, Daniel and Ho, Edmond S. L. and Shum, Hubert P. H.},
     booktitle={Proceedings of the 2020 International Conference on Pattern Recognition},
     series={ICPR '20},
     title={Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention},
     year={2020},
     month={1},
     pages={6011--6018},
     numpages={8},
     doi={10.1109/ICPR48806.2021.9412604},
     location={Milan, Italy},
    }
    RIS
    TY  - CONF
    AU  - Organisciak, Daniel
    AU  - Ho, Edmond S. L.
    AU  - Shum, Hubert P. H.
    T2  - Proceedings of the 2020 International Conference on Pattern Recognition
    TI  - Makeup Style Transfer on Low-Quality Images with Weighted Multi-Scale Attention
    PY  - 2020
    Y1  - 2020/01//
    SP  - 6011
    EP  - 6018
    DO  - 10.1109/ICPR48806.2021.9412604
    ER  - 
    Triplet Loss with Channel Attention for Person Re-Identification
    Citations: 12
    Journal of WSCG - Proceedings of the 2019 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2019
    Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum
    Plain Text
    Daniel Organisciak, Chirine Riachy, Nauman Aslam and Hubert P. H. Shum, "Triplet Loss with Channel Attention for Person Re-Identification," Journal of WSCG, vol. 27, no. 2, pp. 161-169, Plzen, Czech Republic, 2019.
    Bibtex
    @article{organisciak19triplet,
     author={Organisciak, Daniel and Riachy, Chirine and Aslam, Nauman and Shum, Hubert P. H.},
     journal={Journal of WSCG},
     title={Triplet Loss with Channel Attention for Person Re-Identification},
     year={2019},
     volume={27},
     number={2},
     pages={161--169},
     numpages={9},
     doi={10.24132/JWSCG.2019.27.2.9},
     issn={1213-6972},
     location={Plzen, Czech Republic},
    }
    RIS
    TY  - JOUR
    AU  - Organisciak, Daniel
    AU  - Riachy, Chirine
    AU  - Aslam, Nauman
    AU  - Shum, Hubert P. H.
    T2  - Journal of WSCG
    TI  - Triplet Loss with Channel Attention for Person Re-Identification
    PY  - 2019
    VL  - 27
    IS  - 2
    SP  - 161
    EP  - 169
    DO  - 10.24132/JWSCG.2019.27.2.9
    SN  - 1213-6972
    ER  - 
    Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition
    Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition (IAHFAR), 2018
    Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang
    Plain Text
    Zheming Zuo, Daniel Organisciak, Hubert P. H. Shum and Longzhi Yang, "Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition," in IAHFAR '18: Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition, Newcastle upon Tyne, UK, Sep 2018.
    Bibtex
    @inproceedings{zuo18saliency,
     author={Zuo, Zheming and Organisciak, Daniel and Shum, Hubert P. H. and Yang, Longzhi},
     booktitle={Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition},
     series={IAHFAR '18},
     title={Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition},
     year={2018},
     month={9},
     numpages={11},
     location={Newcastle upon Tyne, UK},
    }
    RIS
    TY  - CONF
    AU  - Zuo, Zheming
    AU  - Organisciak, Daniel
    AU  - Shum, Hubert P. H.
    AU  - Yang, Longzhi
    T2  - Proceedings of the 2018 British Machine Vision Conference Workshop on Image Analysis for Human Facial and Activity Recognition
    TI  - Saliency-Informed Spatio-Temporal Vector of Locally Aggregated Descriptors and Fisher Vectors for Visual Action Recognition
    PY  - 2018
    Y1  - 2018/09//
    ER  - 
