
Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers

Abril Corona-Figueroa, Sam Bond-Taylor, Neelanjan Bhowmik, Yona Falinie A. Gaus, Toby P. Breckon, Hubert P. H. Shum and Chris G. Willcocks
Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023

Core A* Conference‡ (H5-Index: 228)
‡ According to CORE Ranking 2023

Abstract

Generating 3D images of complex objects conditionally from a few 2D views is a difficult synthesis problem, compounded by issues such as domain gap and geometric misalignment. For instance, a unified framework such as Generative Adversarial Networks cannot achieve this unless it explicitly defines both a domain-invariant and a geometric-invariant joint latent distribution, whereas Neural Radiance Fields are generally unable to handle both issues as they optimize at the pixel level. By contrast, we propose a simple and novel 2D to 3D synthesis approach based on conditional diffusion with vector-quantized codes. Operating in an information-rich code space enables high-resolution 3D synthesis via full-coverage attention across the views. Specifically, we generate the 3D codes, e.g. for CT images, conditional on previously generated 3D codes and the entire codebook of two 2D views (e.g. 2D X-rays). Qualitative and quantitative results demonstrate state-of-the-art performance over specialized methods across varied evaluation criteria, including fidelity metrics such as density and coverage, and distortion metrics, for two datasets of complex volumetric imagery found in real-world scenarios.
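The sampling idea described in the abstract can be illustrated with a minimal toy sketch (not the authors' code): the 3D volume is represented as discrete vector-quantized code indices, which start in a fully masked (absorbing) state and are revealed over several diffusion steps, each prediction conditioned on the codes of the 2D views and on the 3D codes fixed so far. The `toy_denoiser` below is a hypothetical stand-in for the transformer, and all sizes are illustrative.

```python
import numpy as np

MASK = -1          # absorbing "masked" token used during discrete diffusion
CODEBOOK_SIZE = 8  # toy codebook; real VQ models use far larger codebooks

rng = np.random.default_rng(0)

def toy_denoiser(codes_3d, view_codes, rng):
    """Stand-in for the transformer denoiser: proposes a codebook index for
    each masked 3D position, conditioned on the 2D view codes (here just a
    frequency vote over the conditioning codes, purely for illustration)."""
    counts = np.bincount(view_codes, minlength=CODEBOOK_SIZE)
    probs = counts / counts.sum()
    pred = codes_3d.copy()
    masked = np.where(pred == MASK)[0]
    pred[masked] = rng.choice(CODEBOOK_SIZE, size=masked.size, p=probs)
    return pred

def sample(view_codes, n_codes=16, steps=4, rng=rng):
    """Iteratively unmask the 3D codes over `steps` diffusion steps."""
    codes = np.full(n_codes, MASK)              # start fully masked
    reveal_order = rng.permutation(n_codes)     # positions to reveal per step
    for idx in np.array_split(reveal_order, steps):
        pred = toy_denoiser(codes, view_codes, rng)
        codes[idx] = pred[idx]                  # fix a subset; later steps
                                                # condition on these codes
    return codes

# Code indices as might come from encoding two 2D X-ray views
views = rng.integers(0, CODEBOOK_SIZE, size=32)
volume_codes = sample(views)
print(volume_codes)
```

In the actual method the denoiser is a transformer with full-coverage attention across both view codebooks and the partially generated 3D codes; the sketch above only mirrors the masked-to-unmasked sampling loop.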

Citations

BibTeX

@inproceedings{coronafigueroaa23unaligned,
 author={Corona-Figueroa, Abril and Bond-Taylor, Sam and Bhowmik, Neelanjan and Gaus, Yona Falinie A. and Breckon, Toby P. and Shum, Hubert P. H. and Willcocks, Chris G.},
 booktitle={Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision},
 series={ICCV '23},
 title={Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers},
 year={2023},
 month={10},
 pages={14539--14548},
 numpages={10},
 doi={10.1109/ICCV51070.2023.01341},
 publisher={IEEE/CVF},
 location={Paris, France},
}

RIS

TY  - CONF
AU  - Corona-Figueroa, Abril
AU  - Bond-Taylor, Sam
AU  - Bhowmik, Neelanjan
AU  - Gaus, Yona Falinie A.
AU  - Breckon, Toby P.
AU  - Shum, Hubert P. H.
AU  - Willcocks, Chris G.
T2  - Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision
TI  - Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers
PY  - 2023
Y1  - 2023/10//
SP  - 14539
EP  - 14548
DO  - 10.1109/ICCV51070.2023.01341
PB  - IEEE/CVF
ER  - 

Plain Text

Abril Corona-Figueroa, Sam Bond-Taylor, Neelanjan Bhowmik, Yona Falinie A. Gaus, Toby P. Breckon, Hubert P. H. Shum and Chris G. Willcocks, "Unaligned 2D to 3D Translation with Conditional Vector-Quantized Code Diffusion using Transformers," in ICCV '23: Proceedings of the 2023 IEEE/CVF International Conference on Computer Vision, pp. 14539-14548, Paris, France, IEEE/CVF, Oct 2023.

Supporting Grants

The Engineering and Physical Sciences Research Council
Northern Health Futures Hub (NortHFutures)
EPSRC Digital Health Hub Pilot Scheme (Ref: EP/X031012/1): £4.17 million, Co-Investigator (PI: Prof. Abigail Durrant)
Received from The Engineering and Physical Sciences Research Council, UK, 2023-2026
Project Page

Similar Research

Abril Corona-Figueroa, Jonathan Frawley, Sam Bond-Taylor, Sarath Bethapudi, Hubert P. H. Shum and Chris G. Willcocks, "MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-Aware CT-projections from a Single X-ray", Proceedings of the 2022 International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2022
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Foreground-Aware Dense Depth Estimation for 360 Images", Journal of WSCG - Proceedings of the 2020 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2020
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network", Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022

Last updated on 17 February 2024