Research Publications - 3D Reconstruction

We reconstruct 3D information from 2D observations for medical image analysis, human-computer interaction, ergonomics and design.

Interested in our research? Consider joining us.

Impact Factor 7.0+

Real-Time Posture Reconstruction for Microsoft Kinect
REF 2014 Submitted Output | Impact Factor: 9.4 | Top 10% Journal in Computer Science, Artificial Intelligence | Citation: 183#
IEEE Transactions on Cybernetics (TCyb), 2013
Hubert P. H. Shum, Edmond S. L. Ho, Yang Jiang and Shu Takagi
Webpage Cite This Plain Text
Hubert P. H. Shum, Edmond S. L. Ho, Yang Jiang and Shu Takagi, "Real-Time Posture Reconstruction for Microsoft Kinect," IEEE Transactions on Cybernetics, vol. 43, no. 5, pp. 1357-1369, IEEE, 2013.
Bibtex
@article{shum13realtime,
 author={Shum, Hubert P. H. and Ho, Edmond S. L. and Jiang, Yang and Takagi, Shu},
 journal={IEEE Transactions on Cybernetics},
 title={Real-Time Posture Reconstruction for Microsoft Kinect},
 year={2013},
 volume={43},
 number={5},
 pages={1357--1369},
 numpages={13},
 doi={10.1109/TCYB.2013.2275945},
 issn={2168-2267},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - Shum, Hubert P. H.
AU  - Ho, Edmond S. L.
AU  - Jiang, Yang
AU  - Takagi, Shu
T2  - IEEE Transactions on Cybernetics
TI  - Real-Time Posture Reconstruction for Microsoft Kinect
PY  - 2013
VL  - 43
IS  - 5
SP  - 1357
EP  - 1369
DO  - 10.1109/TCYB.2013.2275945
SN  - 2168-2267
PB  - IEEE
ER  - 
Paper YouTube
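
To reuse any Bibtex record on this page, save it to a .bib file and cite it by its key. Below is a minimal LaTeX sketch for the entry above; the file name refs.bib and the plain bibliography style are assumptions, not part of this page:

\documentclass{article}
\begin{document}
% refs.bib contains the Bibtex record copied from above.
Real-time Kinect posture reconstruction is presented in~\cite{shum13realtime}.
\bibliographystyle{plain}  % any installed .bst style would also work
\bibliography{refs}        % hypothetical file name for the saved records
\end{document}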

Impact Factor 3.0+

Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models
REF 2021 Submitted Output | Impact Factor: 4.7 | Top 25% Journal in Computer Science, Software Engineering | Citation: 65#
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2016
Zhiguang Liu, Liuyang Zhou, Howard Leung and Hubert P. H. Shum
Webpage Cite This Plain Text
Zhiguang Liu, Liuyang Zhou, Howard Leung and Hubert P. H. Shum, "Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models," IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 11, pp. 2437-2450, IEEE, Nov 2016.
Bibtex
@article{liu16kinect,
 author={Liu, Zhiguang and Zhou, Liuyang and Leung, Howard and Shum, Hubert P. H.},
 journal={IEEE Transactions on Visualization and Computer Graphics},
 title={Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models},
 year={2016},
 month={11},
 volume={22},
 number={11},
 pages={2437--2450},
 numpages={14},
 doi={10.1109/TVCG.2015.2510000},
 issn={1077-2626},
 publisher={IEEE},
}
RIS
TY  - JOUR
AU  - Liu, Zhiguang
AU  - Zhou, Liuyang
AU  - Leung, Howard
AU  - Shum, Hubert P. H.
T2  - IEEE Transactions on Visualization and Computer Graphics
TI  - Kinect Posture Reconstruction Based on a Local Mixture of Gaussian Process Models
PY  - 2016
Y1  - 11 2016
VL  - 22
IS  - 11
SP  - 2437
EP  - 2450
DO  - 10.1109/TVCG.2015.2510000
SN  - 1077-2626
PB  - IEEE
ER  - 
Paper YouTube
3D Car Shape Reconstruction from a Contour Sketch using GAN and Lazy Learning
Impact Factor: 3.0 | Citation: 28#
Visual Computer (VC), 2022
Naoki Nozawa, Hubert P. H. Shum, Qi Feng, Edmond S. L. Ho and Shigeo Morishima
Webpage Cite This Plain Text
Naoki Nozawa, Hubert P. H. Shum, Qi Feng, Edmond S. L. Ho and Shigeo Morishima, "3D Car Shape Reconstruction from a Contour Sketch using GAN and Lazy Learning," Visual Computer, vol. 38, no. 4, pp. 1317-1330, Springer, 2022.
Bibtex
@article{nozawa21car,
 author={Nozawa, Naoki and Shum, Hubert P. H. and Feng, Qi and Ho, Edmond S. L. and Morishima, Shigeo},
 journal={Visual Computer},
 title={3D Car Shape Reconstruction from a Contour Sketch using GAN and Lazy Learning},
 year={2022},
 volume={38},
 number={4},
 pages={1317--1330},
 numpages={14},
 doi={10.1007/s00371-020-02024-y},
 issn={1432-2315},
 publisher={Springer},
}
RIS
TY  - JOUR
AU  - Nozawa, Naoki
AU  - Shum, Hubert P. H.
AU  - Feng, Qi
AU  - Ho, Edmond S. L.
AU  - Morishima, Shigeo
T2  - Visual Computer
TI  - 3D Car Shape Reconstruction from a Contour Sketch using GAN and Lazy Learning
PY  - 2022
VL  - 38
IS  - 4
SP  - 1317
EP  - 1330
DO  - 10.1007/s00371-020-02024-y
SN  - 1432-2315
PB  - Springer
ER  - 
Paper YouTube
Filtered Pose Graph for Efficient Kinect Pose Reconstruction
Impact Factor: 3.0 | Citation: 47#
Multimedia Tools and Applications (MTAP), 2017
Pierre Plantard, Hubert P. H. Shum and Franck Multon
Webpage Cite This Plain Text
Pierre Plantard, Hubert P. H. Shum and Franck Multon, "Filtered Pose Graph for Efficient Kinect Pose Reconstruction," Multimedia Tools and Applications, vol. 76, no. 3, pp. 4291-4312, Springer-Verlag, 2017.
Bibtex
@article{plantard16filtered,
 author={Plantard, Pierre and Shum, Hubert P. H. and Multon, Franck},
 journal={Multimedia Tools and Applications},
 title={Filtered Pose Graph for Efficient Kinect Pose Reconstruction},
 year={2017},
 volume={76},
 number={3},
 pages={4291--4312},
 numpages={22},
 doi={10.1007/s11042-016-3546-4},
 issn={1573-7721},
 publisher={Springer-Verlag},
 Address={Berlin, Heidelberg},
}
RIS
TY  - JOUR
AU  - Plantard, Pierre
AU  - Shum, Hubert P. H.
AU  - Multon, Franck
T2  - Multimedia Tools and Applications
TI  - Filtered Pose Graph for Efficient Kinect Pose Reconstruction
PY  - 2017
VL  - 76
IS  - 3
SP  - 4291
EP  - 4312
DO  - 10.1007/s11042-016-3546-4
SN  - 1573-7721
PB  - Springer-Verlag
ER  - 
Paper YouTube

Impact Factor 0.0+

Neural-Code PIFu: High-Fidelity Single Image 3D Human Reconstruction via Neural Code Integration
H5-Index: 56#
Proceedings of the 2024 International Conference on Pattern Recognition (ICPR), 2024
Ruizhi Liu, Paolo Remagnino and Hubert P. H. Shum
Webpage Cite This Plain Text
Ruizhi Liu, Paolo Remagnino and Hubert P. H. Shum, "Neural-Code PIFu: High-Fidelity Single Image 3D Human Reconstruction via Neural Code Integration," in ICPR '24: Proceedings of the 2024 International Conference on Pattern Recognition, Kolkata, India, 2024.
Bibtex
@inproceedings{liu24neuralcode,
 author={Liu, Ruizhi and Remagnino, Paolo and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2024 International Conference on Pattern Recognition},
 series={ICPR '24},
 title={Neural-Code PIFu: High-Fidelity Single Image 3D Human Reconstruction via Neural Code Integration},
 year={2024},
 location={Kolkata, India},
}
RIS
TY  - CONF
AU  - Liu, Ruizhi
AU  - Remagnino, Paolo
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2024 International Conference on Pattern Recognition
TI  - Neural-Code PIFu: High-Fidelity Single Image 3D Human Reconstruction via Neural Code Integration
PY  - 2024
ER  - 
Paper
Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling
Best Paper Award | H5-Index: 115#
Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2024
Abril Corona-Figueroa, Hubert P. H. Shum and Chris G. Willcocks
Webpage Cite This Plain Text
Abril Corona-Figueroa, Hubert P. H. Shum and Chris G. Willcocks, "Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling," in CVPRW '24: Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 2315-2324, Seattle, USA, IEEE/CVF, 2024.
Bibtex
@inproceedings{coronafigueroaa24repeat,
 author={Corona-Figueroa, Abril and Shum, Hubert P. H. and Willcocks, Chris G.},
 booktitle={Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops},
 series={CVPRW '24},
 title={Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling},
 year={2024},
 pages={2315--2324},
 numpages={10},
 doi={10.1109/CVPRW63382.2024.00237},
 publisher={IEEE/CVF},
 location={Seattle, USA},
}
RIS
TY  - CONF
AU  - Corona-Figueroa, Abril
AU  - Shum, Hubert P. H.
AU  - Willcocks, Chris G.
T2  - Proceedings of the 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
TI  - Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling
PY  - 2024
SP  - 2315
EP  - 2324
DO  - 10.1109/CVPRW63382.2024.00237
PB  - IEEE/CVF
ER  - 
Paper
Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers
H5-Index: 291# | Core A* Conference
Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
Abril Corona-Figueroa, Sam Bond-Taylor, Neelanjan Bhowmik, Yona Falinie A. Gaus, Toby P. Breckon, Hubert P. H. Shum and Chris G. Willcocks
Webpage Cite This Plain Text
Abril Corona-Figueroa, Sam Bond-Taylor, Neelanjan Bhowmik, Yona Falinie A. Gaus, Toby P. Breckon, Hubert P. H. Shum and Chris G. Willcocks, "Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers," in ICCV '23: Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, pp. 14539-14548, Paris, France, IEEE/CVF, Oct 2023.
Bibtex
@inproceedings{coronafigueroaa23unaligned,
 author={Corona-Figueroa, Abril and Bond-Taylor, Sam and Bhowmik, Neelanjan and Gaus, Yona Falinie A. and Breckon, Toby P. and Shum, Hubert P. H. and Willcocks, Chris G.},
 booktitle={Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision},
 series={ICCV '23},
 title={Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers},
 year={2023},
 month={10},
 pages={14539--14548},
 numpages={10},
 doi={10.1109/ICCV51070.2023.01341},
 publisher={IEEE/CVF},
 location={Paris, France},
}
RIS
TY  - CONF
AU  - Corona-Figueroa, Abril
AU  - Bond-Taylor, Sam
AU  - Bhowmik, Neelanjan
AU  - Gaus, Yona Falinie A.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
AU  - Willcocks, Chris G.
T2  - Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision
TI  - Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers
PY  - 2023
Y1  - 10 2023
SP  - 14539
EP  - 14548
DO  - 10.1109/ICCV51070.2023.01341
PB  - IEEE/CVF
ER  - 
Paper Supplementary Material YouTube
360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
Core A* Conference | Citation: 25#
Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network," in VR '22: Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 664-673, IEEE, Mar 2022.
Bibtex
@inproceedings{feng22depth,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces},
 series={VR '22},
 title={360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network},
 year={2022},
 month={3},
 pages={664--673},
 numpages={10},
 doi={10.1109/VR51125.2022.00087},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces
TI  - 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
PY  - 2022
Y1  - 3 2022
SP  - 664
EP  - 673
DO  - 10.1109/VR51125.2022.00087
PB  - IEEE
ER  - 
Paper YouTube Part 1 YouTube Part 2
3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models
Core A Conference
Proceedings of the 2022 ACM Symposium on Virtual Reality Software and Technology (VRST), 2022
Ziyi Chang, George Alex Koulieris and Hubert P. H. Shum
Webpage Cite This Plain Text
Ziyi Chang, George Alex Koulieris and Hubert P. H. Shum, "3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models," in VRST '22: Proceedings of the 2022 ACM Symposium on Virtual Reality Software and Technology, pp. 1-10, Tsukuba, Japan, ACM, Nov 2022.
Bibtex
@inproceedings{chang22reconstruction,
 author={Chang, Ziyi and Koulieris, George Alex and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2022 ACM Symposium on Virtual Reality Software and Technology},
 series={VRST '22},
 title={3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models},
 year={2022},
 month={11},
 pages={1--10},
 numpages={10},
 doi={10.1145/3562939.3565632},
 publisher={ACM},
 Address={New York, NY, USA},
 location={Tsukuba, Japan},
}
RIS
TY  - CONF
AU  - Chang, Ziyi
AU  - Koulieris, George Alex
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2022 ACM Symposium on Virtual Reality Software and Technology
TI  - 3D Reconstruction of Sculptures from Single Images via Unsupervised Domain Adaptation on Implicit Models
PY  - 2022
Y1  - 11 2022
SP  - 1
EP  - 10
DO  - 10.1145/3562939.3565632
PB  - ACM
ER  - 
Paper GitHub YouTube
MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-Aware CT-projections from a Single X-ray
Citation: 111#
Proceedings of the 2022 International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2022
Abril Corona-Figueroa, Jonathan Frawley, Sam Bond-Taylor, Sarath Bethapudi, Hubert P. H. Shum and Chris G. Willcocks
Webpage Cite This Plain Text
Abril Corona-Figueroa, Jonathan Frawley, Sam Bond-Taylor, Sarath Bethapudi, Hubert P. H. Shum and Chris G. Willcocks, "MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-Aware CT-projections from a Single X-ray," in EMBC '22: Proceedings of the 2022 International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 3843-3848, Glasgow, UK, IEEE, Jul 2022.
Bibtex
@inproceedings{coronafigueroaa22mednerf,
 author={Corona-Figueroa, Abril and Frawley, Jonathan and Bond-Taylor, Sam and Bethapudi, Sarath and Shum, Hubert P. H. and Willcocks, Chris G.},
 booktitle={Proceedings of the 2022 International Conference of the IEEE Engineering in Medicine and Biology Society},
 series={EMBC '22},
 title={MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-Aware CT-projections from a Single X-ray},
 year={2022},
 month={7},
 pages={3843--3848},
 numpages={6},
 doi={10.1109/EMBC48229.2022.9871757},
 publisher={IEEE},
 location={Glasgow, UK},
}
RIS
TY  - CONF
AU  - Corona-Figueroa, Abril
AU  - Frawley, Jonathan
AU  - Bond-Taylor, Sam
AU  - Bethapudi, Sarath
AU  - Shum, Hubert P. H.
AU  - Willcocks, Chris G.
T2  - Proceedings of the 2022 International Conference of the IEEE Engineering in Medicine and Biology Society
TI  - MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-Aware CT-projections from a Single X-ray
PY  - 2022
Y1  - 7 2022
SP  - 3843
EP  - 3848
DO  - 10.1109/EMBC48229.2022.9871757
PB  - IEEE
ER  - 
Paper GitHub
DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
H5-Index: 51# | Citation: 20#
Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon
Webpage Cite This Plain Text
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications," in 3DV '21: Proceedings of the 2021 International Conference on 3D Vision, pp. 1227-1237, IEEE, Dec 2021.
Bibtex
@inproceedings{li21durlar,
 author={Li, Li and Ismail, Khalid N. and Shum, Hubert P. H. and Breckon, Toby P.},
 booktitle={Proceedings of the 2021 International Conference on 3D Vision},
 series={3DV '21},
 title={DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications},
 year={2021},
 month={12},
 pages={1227--1237},
 numpages={11},
 doi={10.1109/3DV53792.2021.00130},
 publisher={IEEE},
}
RIS
TY  - CONF
AU  - Li, Li
AU  - Ismail, Khalid N.
AU  - Shum, Hubert P. H.
AU  - Breckon, Toby P.
T2  - Proceedings of the 2021 International Conference on 3D Vision
TI  - DurLAR: A High-fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications
PY  - 2021
Y1  - 12 2021
SP  - 1227
EP  - 1237
DO  - 10.1109/3DV53792.2021.00130
PB  - IEEE
ER  - 
Paper Dataset GitHub YouTube
Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction
Proceedings of the 2021 Visual Computing (VC), 2021
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction," in VC '21: Proceedings of the 2021 Visual Computing, Sep 2021.
Bibtex
@inproceedings{feng21biprojection,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2021 Visual Computing},
 series={VC '21},
 title={Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction},
 year={2021},
 month={9},
 numpages={6},
}
RIS
TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2021 Visual Computing
TI  - Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction
PY  - 2021
Y1  - 9 2021
ER  - 
Paper
Foreground-Aware Dense Depth Estimation for 360 Images
Journal of WSCG - Proceedings of the 2020 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2020
Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Webpage Cite This Plain Text
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Foreground-Aware Dense Depth Estimation for 360 Images," Journal of WSCG, vol. 28, no. 1-2, pp. 79-88, Plzen, Czech Republic, May 2020.
Bibtex
@article{feng20foreground,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 journal={Journal of WSCG},
 title={Foreground-Aware Dense Depth Estimation for 360 Images},
 year={2020},
 month={5},
 volume={28},
 number={1--2},
 pages={79--88},
 numpages={10},
 doi={10.24132/JWSCG.2020.28.10},
 issn={1213-6972},
 location={Plzen, Czech Republic},
}
RIS
TY  - JOUR
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Journal of WSCG
TI  - Foreground-Aware Dense Depth Estimation for 360 Images
PY  - 2020
Y1  - 5 2020
VL  - 28
IS  - 1-2
SP  - 79
EP  - 88
DO  - 10.24132/JWSCG.2020.28.10
SN  - 1213-6972
ER  - 
Paper Supplementary Material YouTube
Single Sketch Image Based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning
Best Student Paper Award | Citation: 13#
Proceedings of the 2020 International Conference on Computer Graphics Theory and Applications (GRAPP), 2020
Naoki Nozawa, Hubert P. H. Shum, Edmond S. L. Ho and Shigeo Morishima
Webpage Cite This Plain Text
Naoki Nozawa, Hubert P. H. Shum, Edmond S. L. Ho and Shigeo Morishima, "Single Sketch Image Based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning," in GRAPP '20: Proceedings of the 2020 International Conference on Computer Graphics Theory and Applications, pp. 179-190, Valletta, Malta, SciTePress, Feb 2020.
Bibtex
@inproceedings{nozawa20single,
 author={Nozawa, Naoki and Shum, Hubert P. H. and Ho, Edmond S. L. and Morishima, Shigeo},
 booktitle={Proceedings of the 2020 International Conference on Computer Graphics Theory and Applications},
 series={GRAPP '20},
 title={Single Sketch Image Based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning},
 year={2020},
 month={2},
 pages={179--190},
 numpages={12},
 doi={10.5220/0009157001790190},
 issn={2184-4321},
 isbn={978-989-758-402-2},
 publisher={SciTePress},
 location={Valletta, Malta},
}
RIS
TY  - CONF
AU  - Nozawa, Naoki
AU  - Shum, Hubert P. H.
AU  - Ho, Edmond S. L.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2020 International Conference on Computer Graphics Theory and Applications
TI  - Single Sketch Image Based 3D Car Shape Reconstruction with Deep Learning and Lazy Learning
PY  - 2020
Y1  - 2 2020
SP  - 179
EP  - 190
DO  - 10.5220/0009157001790190
SN  - 2184-4321
PB  - SciTePress
ER  - 
Paper YouTube
3D Car Shape Reconstruction from a Single Sketch Image
Best Poster Award
Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG) Posters, 2019
Naoki Nozawa, Hubert P. H. Shum, Edmond S. L. Ho and Shigeo Morishima
Webpage Cite This Plain Text
Naoki Nozawa, Hubert P. H. Shum, Edmond S. L. Ho and Shigeo Morishima, "3D Car Shape Reconstruction from a Single Sketch Image," in MIG '19: Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games, pp. 37:1-37:2, Newcastle upon Tyne, UK, ACM, Oct 2019.
Bibtex
@inproceedings{nozawa193dcar,
 author={Nozawa, Naoki and Shum, Hubert P. H. and Ho, Edmond S. L. and Morishima, Shigeo},
 booktitle={Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games},
 series={MIG '19},
 title={3D Car Shape Reconstruction from a Single Sketch Image},
 year={2019},
 month={10},
 pages={37:1--37:2},
 numpages={2},
 doi={10.1145/3359566.3364693},
 isbn={978-1-4503-6994-7},
 publisher={ACM},
 Address={New York, NY, USA},
 location={Newcastle upon Tyne, UK},
}
RIS
TY  - CONF
AU  - Nozawa, Naoki
AU  - Shum, Hubert P. H.
AU  - Ho, Edmond S. L.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games
TI  - 3D Car Shape Reconstruction from a Single Sketch Image
PY  - 2019
Y1  - 10 2019
SP  - 37:1
EP  - 37:2
DO  - 10.1145/3359566.3364693
SN  - 978-1-4503-6994-7
PB  - ACM
ER  - 
Paper
Prior-Less 3D Human Shape Reconstruction with an Earth Mover's Distance Informed CNN
Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG) Posters, 2019
Jingtian Zhang, Hubert P. H. Shum, Kevin D. McCay and Edmond S. L. Ho
Webpage Cite This Plain Text
Jingtian Zhang, Hubert P. H. Shum, Kevin D. McCay and Edmond S. L. Ho, "Prior-Less 3D Human Shape Reconstruction with an Earth Mover's Distance Informed CNN," in MIG '19: Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games, pp. 44:1-44:2, Newcastle upon Tyne, UK, ACM, Oct 2019.
Bibtex
@inproceedings{zhang19priorless,
 author={Zhang, Jingtian and Shum, Hubert P. H. and McCay, Kevin D. and Ho, Edmond S. L.},
 booktitle={Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games},
 series={MIG '19},
 title={Prior-Less 3D Human Shape Reconstruction with an Earth Mover's Distance Informed CNN},
 year={2019},
 month={10},
 pages={44:1--44:2},
 numpages={2},
 doi={10.1145/3359566.3364694},
 isbn={978-1-4503-6994-7},
 publisher={ACM},
 Address={New York, NY, USA},
 location={Newcastle upon Tyne, UK},
}
RIS
TY  - CONF
AU  - Zhang, Jingtian
AU  - Shum, Hubert P. H.
AU  - McCay, Kevin D.
AU  - Ho, Edmond S. L.
T2  - Proceedings of the 2019 ACM SIGGRAPH Conference on Motion, Interaction and Games
TI  - Prior-Less 3D Human Shape Reconstruction with an Earth Mover's Distance Informed CNN
PY  - 2019
Y1  - 10 2019
SP  - 44:1
EP  - 44:2
DO  - 10.1145/3359566.3364694
SN  - 978-1-4503-6994-7
PB  - ACM
ER  - 
Paper
Posture Reconstruction Using Kinect with a Probabilistic Model
Core A Conference | Citation: 33#
Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology (VRST), 2014
Liuyang Zhou, Zhiguang Liu, Howard Leung and Hubert P. H. Shum
Webpage Cite This Plain Text
Liuyang Zhou, Zhiguang Liu, Howard Leung and Hubert P. H. Shum, "Posture Reconstruction Using Kinect with a Probabilistic Model," in VRST '14: Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology, pp. 117-125, Edinburgh, UK, ACM, Oct 2014.
Bibtex
@inproceedings{zhou14posture,
 author={Zhou, Liuyang and Liu, Zhiguang and Leung, Howard and Shum, Hubert P. H.},
 booktitle={Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology},
 series={VRST '14},
 title={Posture Reconstruction Using Kinect with a Probabilistic Model},
 year={2014},
 month={10},
 pages={117--125},
 numpages={9},
 doi={10.1145/2671015.2671021},
 publisher={ACM},
 Address={New York, NY, USA},
 location={Edinburgh, UK},
}
RIS
TY  - CONF
AU  - Zhou, Liuyang
AU  - Liu, Zhiguang
AU  - Leung, Howard
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2014 ACM Symposium on Virtual Reality Software and Technology
TI  - Posture Reconstruction Using Kinect with a Probabilistic Model
PY  - 2014
Y1  - 10 2014
SP  - 117
EP  - 125
DO  - 10.1145/2671015.2671021
PB  - ACM
ER  - 
Paper YouTube
Serious Games with Human-Object Interactions using RGB-D Camera
Proceedings of the 2013 ACM International Conference on Motion in Games (MIG) Posters, 2013
Hubert P. H. Shum
Webpage Cite This Plain Text
Hubert P. H. Shum, "Serious Games with Human-Object Interactions using RGB-D Camera," in MIG '13: Proceedings of the 2013 ACM International Conference on Motion in Games, Dublin, Ireland, Springer-Verlag, Nov 2013.
Bibtex
@inproceedings{shum13serious,
 author={Shum, Hubert P. H.},
 booktitle={Proceedings of the 2013 ACM International Conference on Motion in Games},
 series={MIG '13},
 title={Serious Games with Human-Object Interactions using RGB-D Camera},
 year={2013},
 month={11},
 numpages={1},
 publisher={Springer-Verlag},
 Address={Berlin, Heidelberg},
 location={Dublin, Ireland},
}
RIS
TY  - CONF
AU  - Shum, Hubert P. H.
T2  - Proceedings of the 2013 ACM International Conference on Motion in Games
TI  - Serious Games with Human-Object Interactions using RGB-D Camera
PY  - 2013
Y1  - 11 2013
PB  - Springer-Verlag
ER  - 
Paper
Environment Capturing with Microsoft Kinect
Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications (SKIMA), 2012
Kevin Mackay, Hubert P. H. Shum and Taku Komura
Webpage Cite This Plain Text
Kevin Mackay, Hubert P. H. Shum and Taku Komura, "Environment Capturing with Microsoft Kinect," in SKIMA '12: Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications, Dhaka, Bangladesh, Dec 2012.
Bibtex
@inproceedings{mackay12environment,
 author={Mackay, Kevin and Shum, Hubert P. H. and Komura, Taku},
 booktitle={Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications},
 series={SKIMA '12},
 title={Environment Capturing with Microsoft Kinect},
 year={2012},
 month={12},
 numpages={6},
 location={Dhaka, Bangladesh},
}
RIS
TY  - CONF
AU  - Mackay, Kevin
AU  - Shum, Hubert P. H.
AU  - Komura, Taku
T2  - Proceedings of the 2012 International Conference on Software, Knowledge, Information Management and Applications
TI  - Environment Capturing with Microsoft Kinect
PY  - 2012
Y1  - 12 2012
ER  - 
Paper

† According to Journal Citation Reports 2023
‡ According to Core Ranking 2023
# According to Google Scholar 2025

