360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network

Qi Feng, Hubert P. H. Shum and Shigeo Morishima
Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2022

Core A* Conference

‡ According to Core Ranking 2023

Abstract

Single-view depth estimation from omnidirectional images has gained popularity with its wide range of applications such as autonomous driving and scene reconstruction. Although data-driven learning-based methods demonstrate significant potential in this field, scarce training data and ineffective 360 estimation algorithms are still two key limitations hindering accurate estimation across diverse domains. In this work, we first establish a large-scale dataset with varied settings called Depth360 to tackle the training data problem. This is achieved by exploring the use of a plenteous source of data, 360 videos from the internet, using a test-time training method that leverages unique information in each omnidirectional sequence. With novel geometric and temporal constraints, our method generates consistent and convincing depth samples to facilitate single-view estimation. We then propose an end-to-end two-branch multi-task learning network, SegFuse, that mimics the human eye to effectively learn from the dataset and estimate high-quality depth maps from diverse monocular RGB images. With a peripheral branch that uses equirectangular projection for depth estimation and a foveal branch that uses cubemap projection for semantic segmentation, our method predicts consistent global depth while maintaining sharp details in local regions. Experimental results show favorable performance against state-of-the-art methods.
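
For readers who prefer code, the sketch below illustrates the kind of two-branch layout the abstract describes: an equirectangular "peripheral" branch regressing depth and a cubemap "foveal" branch predicting per-face semantics, with the cubemap features folded back into the depth head. This is a minimal PyTorch sketch under assumed layer sizes, face ordering, and a deliberately crude fusion step; it is not the published SegFuse implementation (see the paper and downloads for that).

# Minimal sketch only; layer sizes, face order, and the fusion step are
# illustrative assumptions, not the published SegFuse architecture.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


def equirect_to_cubemap(erp: torch.Tensor, face_size: int) -> torch.Tensor:
    """Resample an equirectangular image (B, C, H, W) into six cube faces
    (B, 6, C, face_size, face_size) with grid_sample. Axis conventions and
    face order are arbitrary choices for this sketch; no seam handling."""
    B, C, H, W = erp.shape
    t = (torch.arange(face_size, device=erp.device) + 0.5) / face_size * 2 - 1
    v, u = torch.meshgrid(t, t, indexing="ij")           # v: down, u: right
    one = torch.ones_like(u)
    dirs = torch.stack([                                  # x right, y down, z forward
        torch.stack([one,  v, -u], -1),                   # +x (right)
        torch.stack([-one, v,  u], -1),                   # -x (left)
        torch.stack([u,  one, -v], -1),                   # +y (down)
        torch.stack([u, -one,  v], -1),                   # -y (up)
        torch.stack([u,    v, one], -1),                  # +z (front)
        torch.stack([-u,   v, -one], -1),                 # -z (back)
    ])                                                    # (6, S, S, 3)
    x, y, z = dirs.unbind(-1)
    lon = torch.atan2(x, z)                               # longitude in [-pi, pi]
    lat = torch.asin(y / dirs.norm(dim=-1))               # latitude in [-pi/2, pi/2]
    grid = torch.stack([lon / math.pi, 2 * lat / math.pi], dim=-1)  # (6, S, S, 2)
    grid = grid.unsqueeze(0).expand(B, -1, -1, -1, -1).reshape(B * 6, face_size, face_size, 2)
    erp6 = erp.unsqueeze(1).expand(-1, 6, -1, -1, -1).reshape(B * 6, C, H, W)
    faces = F.grid_sample(erp6, grid, align_corners=False)
    return faces.view(B, 6, C, face_size, face_size)


class TwoBranchSketch(nn.Module):
    """Peripheral ERP branch for depth + foveal cubemap branch for segmentation."""

    def __init__(self, num_classes: int = 13, face_size: int = 128):
        super().__init__()
        self.face_size = face_size
        self.erp_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.cube_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.seg_head = nn.Conv2d(64, num_classes, 1)             # per-face semantics
        self.depth_head = nn.Conv2d(64 + 64, 1, 3, padding=1)     # fused global depth

    def forward(self, erp: torch.Tensor):
        B, _, H, W = erp.shape
        erp_feat = self.erp_encoder(erp)                          # (B, 64, H, W)
        faces = equirect_to_cubemap(erp, self.face_size)          # (B, 6, 3, S, S)
        cube_feat = self.cube_encoder(faces.flatten(0, 1))        # (B*6, 64, S, S)
        seg = self.seg_head(cube_feat).view(B, 6, -1, self.face_size, self.face_size)
        # Crude stand-in for cubemap-to-ERP feature fusion: pool faces and broadcast.
        pooled = cube_feat.view(B, 6, 64, -1).mean(dim=(1, 3))    # (B, 64)
        fused = torch.cat([erp_feat,
                           pooled[:, :, None, None].expand(-1, -1, H, W)], dim=1)
        depth = self.depth_head(fused)                            # (B, 1, H, W)
        return depth, seg


if __name__ == "__main__":
    model = TwoBranchSketch()
    rgb = torch.rand(1, 3, 256, 512)       # one equirectangular RGB frame
    depth, seg = model(rgb)
    print(depth.shape, seg.shape)          # (1, 1, 256, 512) and (1, 6, 13, 128, 128)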

Downloads

YouTube



Citations

BibTeX

@inproceedings{feng22depth,
 author={Feng, Qi and Shum, Hubert P. H. and Morishima, Shigeo},
 booktitle={Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces},
 series={VR '22},
 title={360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network},
 year={2022},
 month={3},
 pages={664--673},
 numpages={10},
 doi={10.1109/VR51125.2022.00087},
 publisher={IEEE},
}

RIS

TY  - CONF
AU  - Feng, Qi
AU  - Shum, Hubert P. H.
AU  - Morishima, Shigeo
T2  - Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces
TI  - 360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network
PY  - 2022
Y1  - 3 2022
SP  - 664
EP  - 673
DO  - 10.1109/VR51125.2022.00087
PB  - IEEE
ER  - 

Plain Text

Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "360 Depth Estimation in the Wild - The Depth360 Dataset and the SegFuse Network," in VR '22: Proceedings of the 2022 IEEE Conference on Virtual Reality and 3D User Interfaces, pp. 664-673, IEEE, Mar 2022.

Supporting Grants

Similar Research

Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Foreground-Aware Dense Depth Estimation for 360 Images", Journal of WSCG - Proceedings of the 2020 International Conferences in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG), 2020
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Bi-Projection Based Foreground-Aware Omnidirectional Depth Prediction", Proceedings of the 2021 Visual Computing (VC), 2021
Li Li, Khalid N. Ismail, Hubert P. H. Shum and Toby P. Breckon, "DurLAR: A High-Fidelity 128-Channel LiDAR Dataset with Panoramic Ambient and Reflectivity Imagery for Multi-Modal Autonomous Driving Applications", Proceedings of the 2021 International Conference on 3D Vision (3DV), 2021
Qi Feng, Hubert P. H. Shum and Shigeo Morishima, "Enhancing Perception and Immersion in Pre-Captured Environments through Learning-Based Eye Height Adaptation", Proceedings of the 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023
Li Li, Hubert P. H. Shum and Toby P. Breckon, "Less is More: Reducing Task and Model Complexity for 3D Point Cloud Semantic Segmentation", Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

Last updated on 14 April 2024