Incremental Dense Reconstruction From Monocular Video With Guided Sparse Feature Volume Fusion

Author(s): Zuo, Xingxing
Author(s): Yang, Nan
Author(s): Merrill, Nathaniel
Author(s): Xu, Binbin
Author(s): Leutenegger, Stefan
Date Accessioned: 2023-07-11T18:07:45Z
Date Available: 2023-07-11T18:07:45Z
Publication Date: 2023-05-08
Description: This article was originally published in IEEE Robotics and Automation Letters. The version of record is available at: https://doi.org/10.1109/LRA.2023.3273509. © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract: Incrementally recovering 3D dense structures from monocular videos is of paramount importance since it enables various robotics and AR applications. Feature volumes have recently been shown to enable efficient and accurate incremental dense reconstruction without the need to first estimate depth, but they cannot achieve as high a resolution as depth-based methods due to the large memory consumption of high-resolution feature volumes. This letter proposes a real-time feature volume-based dense reconstruction method that predicts TSDF (Truncated Signed Distance Function) values from a novel sparsified deep feature volume, which is able to achieve higher resolutions than previous feature volume-based methods and is favorable in large-scale outdoor scenarios where the majority of voxels are empty. An uncertainty-aware multi-view stereo (MVS) network is leveraged to infer initial voxel locations of the physical surface in a sparse feature volume. Then, to refine the recovered 3D geometry, deep features are attentively aggregated from multi-view images at potential surface locations and temporally fused. Besides achieving higher resolutions than before, our method is shown to produce more complete reconstructions with finer detail in many cases. Extensive evaluations on both public and self-collected datasets demonstrate that our method achieves very competitive real-time reconstruction results compared to state-of-the-art reconstruction methods in both indoor and outdoor settings.
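To make the pipeline described in the abstract concrete, below is a minimal, self-contained PyTorch sketch of the core idea: candidate surface voxels (e.g. seeded from MVS depth and uncertainty predictions) are projected into each view, per-view deep features are sampled and attentively aggregated, and an MLP head regresses a TSDF value per voxel. This is an illustrative sketch under assumed shapes and layer sizes, not the authors' released code; the class and function names are hypothetical, and the temporal fusion across fragments mentioned in the abstract is omitted.

# Sketch only: attention-based multi-view feature fusion for sparse TSDF
# prediction. All shapes, layer sizes, and names are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch import nn

class SparseVolumeFusion(nn.Module):
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        # Scores each per-view feature sample; a softmax over views turns
        # the scores into attention weights for aggregation.
        self.score = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))
        # Maps the fused per-voxel feature to a TSDF value in [-1, 1].
        self.tsdf_head = nn.Sequential(
            nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Tanh())

    def forward(self, voxels_w, feats, K, T_cw):
        """voxels_w: (M, 3) world-frame centers of candidate surface voxels;
        feats: (V, C, H, W) per-view 2D deep features; K: (3, 3) intrinsics;
        T_cw: (V, 4, 4) world-to-camera transforms."""
        V, C, H, W = feats.shape
        M = voxels_w.shape[0]
        # Project voxel centers into every view.
        homo = torch.cat(
            [voxels_w, torch.ones(M, 1, device=voxels_w.device)], dim=1)   # (M, 4)
        cam = torch.einsum('vij,mj->vmi', T_cw, homo)[..., :3]             # (V, M, 3)
        z = cam[..., 2].clamp(min=1e-4)
        uv = torch.einsum('ij,vmj->vmi', K, cam / z.unsqueeze(-1))[..., :2]  # (V, M, 2)
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = torch.stack(
            [uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], dim=-1) * 2 - 1  # (V, M, 2)
        sampled = F.grid_sample(feats, grid.unsqueeze(2), align_corners=True)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)                     # (V, M, C)
        # Mask views where the voxel is behind the camera or off-image.
        valid = (cam[..., 2] > 0) & (grid.abs() <= 1).all(dim=-1)          # (V, M)
        logits = self.score(sampled).squeeze(-1).masked_fill(~valid, -1e9)
        w = torch.softmax(logits, dim=0).unsqueeze(-1)                     # (V, M, 1)
        fused = (w * sampled).sum(dim=0)                                   # (M, C)
        return self.tsdf_head(fused).squeeze(-1)                           # (M,) TSDF

Because only voxels near the predicted surface are kept, memory scales with the surface area rather than with the full volume, which is what allows higher resolutions than dense feature-volume methods.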
Sponsor: Munich Center for Machine Learning (10.13039/501100005713), Technische Universität München
Citation: X. Zuo, N. Yang, N. Merrill, B. Xu and S. Leutenegger, "Incremental Dense Reconstruction From Monocular Video With Guided Sparse Feature Volume Fusion," in IEEE Robotics and Automation Letters, vol. 8, no. 6, pp. 3876-3883, June 2023, doi: 10.1109/LRA.2023.3273509.
ISSN: 2377-3766
URL: https://udspace.udel.edu/handle/19716/32986
Language: en_US
Publisher: IEEE Robotics and Automation Letters
Keywords: Monocular dense mapping; neural implicit representation; feature volume fusion
Title: Incremental Dense Reconstruction From Monocular Video With Guided Sparse Feature Volume Fusion
Type: Article
Files:
Original bundle:
Name: Incremental Dense Reconstruction from Monocular Video with Guided Sparse Feature Volume Fusion.pdf
Size: 10.42 MB
Format: Adobe Portable Document Format
Description: Main article
License bundle:
Name: license.txt
Size: 2.22 KB
Format: Item-specific license agreed upon to submission