Research Publications - Datasets

We construct benchmark datasets that facilitate research and development in computer vision, computer graphics and biomedical engineering.

Interested in our research? Consider joining us.

Impact Factor 10.0+

Action Recognition from Arbitrary Views Using Transferable Dictionary Learning
REF 2021 Submitted Output · Impact Factor: 10.8† · Top 10% Journal in Computer Science, Artificial Intelligence · Citation: 68#
IEEE Transactions on Image Processing (TIP), 2018
Jingtian Zhang, Hubert P. H. Shum, Jungong Han and Ling Shao
Webpage
Plain Text
Jingtian Zhang, Hubert P. H. Shum, Jungong Han and Ling Shao, "Action Recognition from Arbitrary Views Using Transferable Dictionary Learning," IEEE Transactions on Image Processing, vol. 27, no. 10, pp. 4709-4723, IEEE, 2018.
Bibtex
@article{zhang18arbitrary,
 author={Zhang, Jingtian and Shum, Hubert P. H. and Han, Jungong and Shao, Ling},
 journal={IEEE Transactions on Image Processing},
 title={Action Recognition from Arbitrary Views Using Transferable Dictionary Learning},
 year={2018},
 volume={27},
 number={10},
 pages={4709--4723},
 numpages={15},
 doi={10.1109/TIP.2018.2836323},
 issn={1057-7149},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - Zhang, Jingtian
AU  - Shum, Hubert P. H.
AU  - Han, Jungong
AU  - Shao, Ling
T2  - IEEE Transactions on Image Processing
TI  - Action Recognition from Arbitrary Views Using Transferable Dictionary Learning
PY  - 2018
VL  - 27
IS  - 10
SP  - 4709
EP  - 4723
DO  - 10.1109/TIP.2018.2836323
SN  - 1057-7149
PB  - IEEE
ER  - 
Paper Dataset

Impact Factor 5.0+

Sparse Metric-Based Mesh Saliency
Impact Factor: 5.5† · Top 25% Journal in Computer Science, Artificial Intelligence · Citation: 11#
Neurocomputing, 2020
Shanfeng Hu, Xiaohui Liang, Hubert P. H. Shum, Frederick W. B. Li and Nauman Aslam
Webpage
Plain Text
Shanfeng Hu, Xiaohui Liang, Hubert P. H. Shum, Frederick W. B. Li and Nauman Aslam, "Sparse Metric-Based Mesh Saliency," Neurocomputing, vol. 400, pp. 11-23, Elsevier, 2020.
Bibtex
@article{hu20sparse,
 author={Hu, Shanfeng and Liang, Xiaohui and Shum, Hubert P. H. and Li, Frederick W. B. and Aslam, Nauman},
 journal={Neurocomputing},
 title={Sparse Metric-Based Mesh Saliency},
 year={2020},
 volume={400},
 pages={11--23},
 numpages={13},
 doi={10.1016/j.neucom.2020.02.106},
 issn={0925-2312},
 publisher={Elsevier},
}
RIS
TY  - JOUR
AU  - Hu, Shanfeng
AU  - Liang, Xiaohui
AU  - Shum, Hubert P. H.
AU  - Li, Frederick W. B.
AU  - Aslam, Nauman
T2  - Neurocomputing
TI  - Sparse Metric-Based Mesh Saliency
PY  - 2020
VL  - 400
SP  - 11
EP  - 23
DO  - 10.1016/j.neucom.2020.02.106
SN  - 0925-2312
PB  - Elsevier
ER  - 
Paper Dataset GitHub

Impact Factor 3.0+

Automatic Musculoskeletal and Neurological Disorder Diagnosis with Relative Joint Displacement from Human Gait
REF 2021 Submitted Output · Impact Factor: 4.8† · Citation: 28#
IEEE Transactions on Neural Systems and Rehabilitation Engineering (TNSRE), 2018
Worasak Rueangsirarak, Jingtian Zhang, Nauman Aslam and Hubert P. H. Shum
Webpage
Plain Text
Worasak Rueangsirarak, Jingtian Zhang, Nauman Aslam and Hubert P. H. Shum, "Automatic Musculoskeletal and Neurological Disorder Diagnosis with Relative Joint Displacement from Human Gait," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 12, pp. 2387-2396, IEEE, 2018.
Bibtex
@article{rueangsirarak18automatic,
 author={Rueangsirarak, Worasak and Zhang, Jingtian and Aslam, Nauman and Shum, Hubert P. H.},
 journal={IEEE Transactions on Neural Systems and Rehabilitation Engineering},
 title={Automatic Musculoskeletal and Neurological Disorder Diagnosis with Relative Joint Displacement from Human Gait},
 year={2018},
 volume={26},
 number={12},
 pages={2387--2396},
 numpages={10},
 doi={10.1109/TNSRE.2018.2880871},
 issn={1534-4320},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - Rueangsirarak, Worasak
AU  - Zhang, Jingtian
AU  - Aslam, Nauman
AU  - Shum, Hubert P. H.
T2  - IEEE Transactions on Neural Systems and Rehabilitation Engineering
TI  - Automatic Musculoskeletal and Neurological Disorder Diagnosis with Relative Joint Displacement from Human Gait
PY  - 2018
VL  - 26
IS  - 12
SP  - 2387
EP  - 2396
DO  - 10.1109/TNSRE.2018.2880871
SN  - 1534-4320
PB  - IEEE
ER  - 
Paper Dataset GitHub
Interaction-Based Human Activity Comparison
REF 2021 Submitted Output · Impact Factor: 4.7† · Top 25% Journal in Computer Science, Software Engineering · Citation: 29#
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2020
Yijun Shen, Longzhi Yang, Edmond S. L. Ho and Hubert P. H. Shum
Webpage
Plain Text
Yijun Shen, Longzhi Yang, Edmond S. L. Ho and Hubert P. H. Shum, "Interaction-Based Human Activity Comparison," IEEE Transactions on Visualization and Computer Graphics, vol. 26, no. 8, pp. 115673-115684, IEEE, 2020.
Bibtex
@article{shen20interaction,
 author={Shen, Yijun and Yang, Longzhi and Ho, Edmond S. L. and Shum, Hubert P. H.},
 journal={IEEE Transactions on Visualization and Computer Graphics},
 title={Interaction-Based Human Activity Comparison},
 year={2020},
 volume={26},
 number={8},
 pages={115673--115684},
 numpages={14},
 doi={10.1109/TVCG.2019.2893247},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - Shen, Yijun
AU  - Yang, Longzhi
AU  - Ho, Edmond S. L.
AU  - Shum, Hubert P. H.
T2  - IEEE Transactions on Visualization and Computer Graphics
TI  - Interaction-Based Human Activity Comparison
PY  - 2020
VL  - 26
IS  - 8
SP  - 115673
EP  - 115684
DO  - 10.1109/TVCG.2019.2893247
PB  - IEEE
ER  - 
Paper Dataset GitHub YouTube
Abnormal Infant Movements Classification with Deep Learning on Pose-Based Features
REF 2021 Submitted Output · Impact Factor: 3.4† · Citation: 80#
IEEE Access, 2020
Kevin D. McCay, Edmond S. L. Ho, Hubert P. H. Shum, Gerhard Fehringer, Claire Marcroft and Nicholas Embleton
Webpage
Plain Text
Kevin D. McCay, Edmond S. L. Ho, Hubert P. H. Shum, Gerhard Fehringer, Claire Marcroft and Nicholas Embleton, "Abnormal Infant Movements Classification with Deep Learning on Pose-Based Features," IEEE Access, vol. 8, no. 1, pp. 51582-51592, IEEE, 2020.
Bibtex
@article{mccay20abnormal,
 author={McCay, Kevin D. and Ho, Edmond S. L. and Shum, Hubert P. H. and Fehringer, Gerhard and Marcroft, Claire and Embleton, Nicholas},
 journal={IEEE Access},
 title={Abnormal Infant Movements Classification with Deep Learning on Pose-Based Features},
 year={2020},
 volume={8},
 number={1},
 pages={51582--51592},
 numpages={11},
 doi={10.1109/ACCESS.2020.2980269},
 issn={2169-3536},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - McCay, Kevin D.
AU  - Ho, Edmond S. L.
AU  - Shum, Hubert P. H.
AU  - Fehringer, Gerhard
AU  - Marcroft, Claire
AU  - Embleton, Nicholas
T2  - IEEE Access
TI  - Abnormal Infant Movements Classification with Deep Learning on Pose-Based Features
PY  - 2020
VL  - 8
IS  - 1
SP  - 51582
EP  - 51592
DO  - 10.1109/ACCESS.2020.2980269
SN  - 2169-3536
PB  - IEEE
ER  - 
Paper GitHub
Advancing Healthcare Practice and Education via Data Sharing: Demonstrating the Utility of Open Data by Training an Artificial Intelligence Model to Assess Cardiopulmonary Resuscitation Skills
Impact Factor: 3.0† · Top 25% Journal in Education & Educational Research
Advances in Health Sciences Education (AHSE), 2024
Merryn D. Constable, Francis Xiatian Zhang, Tony Conner, Daniel Monk, Jason Rajsic, Claire Ford, Laura Jillian Park, Alan Platt, Debra Porteous, Lawrence Grierson and Hubert P. H. Shum
Webpage
Plain Text
Merryn D. Constable, Francis Xiatian Zhang, Tony Conner, Daniel Monk, Jason Rajsic, Claire Ford, Laura Jillian Park, Alan Platt, Debra Porteous, Lawrence Grierson and Hubert P. H. Shum, "Advancing Healthcare Practice and Education via Data Sharing: Demonstrating the Utility of Open Data by Training an Artificial Intelligence Model to Assess Cardiopulmonary Resuscitation Skills," Advances in Health Sciences Education, Springer, 2024.
Bibtex
@article{constable24advancing,
 author={Constable, Merryn D. and Zhang, Francis Xiatian and Conner, Tony and Monk, Daniel and Rajsic, Jason and Ford, Claire and Park, Laura Jillian and Platt, Alan and Porteous, Debra and Grierson, Lawrence and Shum, Hubert P. H.},
 journal={Advances in Health Sciences Education},
 title={Advancing Healthcare Practice and Education via Data Sharing: Demonstrating the Utility of Open Data by Training an Artificial Intelligence Model to Assess Cardiopulmonary Resuscitation Skills},
 year={2024},
 doi={10.1007/s10459-024-10369-5},
 issn={1573-1677},
 publisher={Springer},
}
RIS
TY  - JOUR
AU  - Constable, Merryn D.
AU  - Zhang, Francis Xiatian
AU  - Conner, Tony
AU  - Monk, Daniel
AU  - Rajsic, Jason
AU  - Ford, Claire
AU  - Park, Laura Jillian
AU  - Platt, Alan
AU  - Porteous, Debra
AU  - Grierson, Lawrence
AU  - Shum, Hubert P. H.
T2  - Advances in Health Sciences Education
TI  - Advancing Healthcare Practice and Education via Data Sharing: Demonstrating the Utility of Open Data by Training an Artificial Intelligence Model to Assess Cardiopulmonary Resuscitation Skills
PY  - 2024
DO  - 10.1007/s10459-024-10369-5
SN  - 1573-1677
PB  - Springer
ER  - 
Paper Dataset

Impact Factor 0.0+

Geometric Features Informed Multi-Person Human-Object Interaction Recognition in Videos
H5-Index: 206# · Core A* Conference‡ · Citation: 18#
Proceedings of the 2022 European Conference on Computer Vision (ECCV), 2022
Tanqiu Qiao, Qianhui Men, Frederick W. B. Li, Yoshiki Kubotani, Shigeo Morishima and Hubert P. H. Shum
Webpage
Plain Text
Tanqiu Qiao, Qianhui Men, Frederick W. B. Li, Yoshiki Kubotani, Shigeo Morishima and Hubert P. H. Shum, "Geometric Features Informed Multi-Person Human-Object Interaction Recognition in Videos," in ECCV '22: Proceedings of the 2022 European Conference on Computer Vision, pp. 474-491, Tel Aviv, Israel, Springer, Oct 2022.
Bibtex
@inproceedings{qiao22geometric,
 author={Qiao, Tanqiu and Men, Qianhui and Li, Frederick W. B. and Kubotani, Yoshiki and Morishima, Shigeo and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2022 European Conference on Computer Vision},
 series={ECCV '22},
 title={Geometric Features Informed Multi-Person Human-Object Interaction Recognition in Videos},
 year={2022},
 month={10},
 pages={474--491},
 numpages={18},
 doi={10.1007/978-3-031-19772-7_28},
 isbn={978-3-031-19772-7},
 publisher={Springer},
 location={Tel Aviv, Israel},
}
RIS
TY  - CONF
AU  - Qiao, Tanqiu
AU  - Men, Qianhui
AU  - Li, Frederick W. B.
AU  - Kubotani, Yoshiki
AU  - Morishima, Shigeo
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2022 European Conference on Computer Vision
TI  - Geometric Features Informed Multi-Person Human-Object Interaction Recognition in Videos
PY  - 2022
Y1  - 10 2022
SP  - 474
EP  - 491
DO  - 10.1007/978-3-031-19772-7_28
SN  - 978-3-031-19772-7
PB  - Springer
ER  - 
Paper · Supplementary Material · Dataset · GitHub
360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
Core A* Conference‡ · Citation: 25#
Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage
Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network," in VR '22: Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 664-673, IEEE, Mar 2022.
Bibtex
@inproceedings{feng22depth,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces},
 series={VR '22},
 title={360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network},
 year={2022},
 month={3},
 pages={664--673},
 numpages={10},
 doi={10.1109/VR51125.2022.00087},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces
TI  - 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
PY  - 2022
Y1  - 3 2022
SP  - 664
EP  - 673
DO  - 10.1109/VR51125.2022.00087
PB  - IEEE
ER  - 
Paper · YouTube Part 1 · YouTube Part 2
UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
Proceedings of the 2022 International Conference on Computer Vision Theory and Applications (VISAPP), 2022
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum
Webpage
Plain Text
Daniel Organisciak, Matthew Poyser, Aishah Alsehaim, Shanfeng Hu, Brian K. S. Isaac-Medina, Toby P. Breckon and Hubert P. H. Shum, "UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery," in VISAPP '22: Proceedings of the 2022 International Conference on Computer Vision Theory and Applications, pp. 136-146, SciTePress, Feb 2022.
Bibtex
@inproceedings{organisciak22uavreid,
 author={Organisciak, Daniel and Poyser, Matthew and Alsehaim, Aishah and Hu, Shanfeng and Isaac-Medina, Brian K. S. and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2022 International Conference on Computer Vision Theory and Applications},
 series={VISAPP '22},
 title={UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery},
 year={2022},
 month={2},
 pages={136--146},
 numpages={11},
 doi={10.5220/0010836600003124},
 isbn={978-989-758-555-5},
 publisher={SciTePress},
}
RIS
TY  - CONF
AU  - Organisciak, Daniel
AU  - Poyser, Matthew
AU  - Alsehaim, Aishah
AU  - Hu, Shanfeng
AU  - Isaac-Medina, Brian K. S.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2022 International Conference on Computer Vision Theory and Applications
TI  - UAV-ReID: A Benchmark on Unmanned Aerial Vehicle Re-Identification in Video Imagery
PY  - 2022
Y1  - 2 2022
SP  - 136
EP  - 146
DO  - 10.5220/0010836600003124
SN  - 978-989-758-555-5
PB  - SciTePress
ER  - 
Paper GitHub
DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
H5-Index: 51# · Citation: 20#
Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon
Webpage
Plain Text
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications," in 3DV '21: Proceedings of the 2021 International Conference on 3D Vision, pp. 1227-1237, IEEE, Dec 2021.
Bibtex
@inproceedings{li21durlar,
 author={Li, Li and Ismail, Khalid N. and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2021 International Conference on 3D Vision},
 series={3DV '21},
 title={DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications},
 year={2021},
 month={12},
 pages={1227--1237},
 numpages={11},
 doi={10.1109/3DV53792.2021.00130},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Ismail, Khalid N.
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2021 International Conference on 3D Vision
TI  - DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
PY  - 2021
Y1  - 12 2021
SP  - 1227
EP  - 1237
DO  - 10.1109/3DV53792.2021.00130
PB  - IEEE
ER  - 
Paper Dataset GitHub YouTube
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
H5-Index: 80# · Citation: 94#
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum
Webpage
Plain Text
Brian K. S. Isaac-Medina, Matthew Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon and Hubert P. H. Shum, "Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark," in ICCVW '21: Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops, pp. 1223-1232, IEEE/CVF, Oct 2021.
Bibtex
@inproceedings{issacmedina21unmanned,
 author={Isaac-Medina, Brian K. S. and Poyser, Matthew and Organisciak, Daniel and Willcocks, Chris G. and Breckon, Toby P. and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops},
 series={ICCVW '21},
 title={Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark},
 year={2021},
 month={10},
 pages={1223--1232},
 numpages={10},
 doi={10.1109/ICCVW54120.2021.00142},
 publisher={IEEE/CVF},
}
RIS
TY  - CONF
AU  - Isaac-Medina, Brian K. S.
AU  - Poyser, Matthew
AU  - Organisciak, Daniel
AU  - Willcocks, Chris G.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops
TI  - Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
PY  - 2021
Y1  - 10 2021
SP  - 1223
EP  - 1232
DO  - 10.1109/ICCVW54120.2021.00142
PB  - IEEE/CVF
ER  - 
Paper GitHub
Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data
H5-Index: 122# · Core A* Conference‡ · Citation: 16#
Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016
Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao
Webpage
Plain Text
Jingtian Zhang, Lining Zhang, Hubert P. H. Shum and Ling Shao, "Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data," in ICRA '16: Proceedings of the 2016 IEEE International Conference on Robotics and Automation, pp. 1678-1684, Stockholm, Sweden, IEEE, May 2016.
Bibtex
@inproceedings{zhang16arbitrary,
 author={Zhang, Jingtian and Zhang, Lining and Shum, Hubert P. H. and Shao, Ling},
 booktitle={Proceedings of the 2016 IEEE International Conference on Robotics and Automation},
 series={ICRA '16},
 title={Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data},
 year={2016},
 month={5},
 pages={1678--1684},
 numpages={8},
 doi={10.1109/ICRA.2016.7487309},
 publisher={IEEE},
 location={Stockholm, Sweden},
}
RIS
TY  - CONF
AU  - Zhang, Jingtian
AU  - Zhang, Lining
AU  - Shum, Hubert P. H.
AU  - Shao, Ling
T2  - Proceedings of the 2016 IEEE International Conference on Robotics and Automation
TI  - Arbitrary View Action Recognition via Transfer Dictionary Learning on Synthetic Training Data
PY  - 2016
Y1  - 5 2016
SP  - 1678
EP  - 1684
DO  - 10.1109/ICRA.2016.7487309
PB  - IEEE
ER  - 
Paper Dataset GitHub

† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2025

