This is a collection of documents and topics accumulated by the NeRF/3DGS & Beyond channel, as well as papers from the literature. Since there are so many papers out there, we have split them into two separate repositories: NeRF and Beyond Docs and 3DGS and Beyond Docs. Please choose according to your preference.
For papers we have discussed in the group, a Notes link is appended to the entry. You can follow the link to check whether a topic you are interested in has already been covered. If not, you are welcome to join us and raise your question to the crowd; the mighty community might have your answers.
We actively maintain this page, trying to stay up to date and gather important works on a daily basis. We also aim to attach notes to as many works as possible to make it easier to catch up.
Please feel free to join our WeChat group or start a discussion topic here.
I have recently published a book on NeRF/3DGS with PHEI (Publishing House of Electronics Industry). This would not have been possible without the help of the whole 3D vision community. It is now available on jd.com (Checkout here) and should serve as a reference handbook for NeRF/3DGS beginners and engineers in related areas. I sincerely hope the book is helpful in some way.
For those of you who have already purchased the book, all references can be downloaded HERE. If you experience any issues reading the book or have any suggestions for improving it, please contact me via email at [email protected], or contact me directly on WeChat: jiheng_yang. I look forward to talking with anyone who reaches out. Thanks in advance.
For now, you can join us in the following ways:
- Bilibili Channel, where we post near-daily updates (primarily) on NeRF.
- WeChat group: due to WeChat's group size limit, please add my personal account jiheng_yang, and I will add you to the chat groups.
- If you want to view this from a timeline perspective, please refer to this ProcessOn Diagram
- If you think something is incorrect or could be done better, please write to us through any of the channels above or open an issue. All suggestions are appreciated!
- For other techniques related to 3D reconstruction and NeRF, please refer to this link; we are constantly adding more resources to that document.
- We are gradually setting up Discord channels; join the Discord Channel if you like. We look forward to talking with you!
For 3DGS-related progress, please refer to 3DGS and Beyond Docs.
- NeRF and Beyond Docs
- NeRF/3DGS Book
- How to join us
- 3DGS Progresses
- NeRF progresses
- New to NeRF
- NeRF Fundamental Enhancements
- Depth Supervised Reconstruction
- Activation Function Optimization
- Positional Encoding
- Deformable & Dynamic NeRF
- NeRF Training and Rendering Speed Enhancements
- One-Shot/Few-Shot Sparse View Reconstruction
- NeRF-3DGS Transfer
- NeRF Based SLAM
- Camera Pose Estimation & Weak Camera Pose Reconstruction
- NeRF with MVS
- NeRF AIGC
- Generalization
- Model Compression
- NeRF Based 2D High Quality Image Synthesis
- SDF Based Reconstruction / Other Geometry Based Reconstruction
- NeRF + Hardware Optimization/Acceleration
- NeRF + Light Field Rendering
- NeRF + Point Cloud / LiDAR
- NeRF + Auto Data Collection
- NeRF + Avatar/Talking Head
- NeRF + Imaging Tasks
- NeRF + Super-resolution
- NeRF + Indoor Scenes
- NeRF + Large Scale Scenes & Urban Scenes
- NeRF + Autonomous Driving
- NeRF + Editing
- NeRF + Relighting
- NeRF + Open Surface Reconstruction and Cloth Simulation
- NeRF + Segmentation
- NeRF + Multi-Modal
- NeRF + Semantic/Understanding
- NeRF + Mesh Extraction
- NeRF + Codec/Streaming
- NeRF + Model Conversion
- NeRF + Medical/Biology
- NeRF + Inverse Rendering
- NeRF + Texture Synthesis
- NeRF + Robotics
- NeRF + Transparent and Specular
- Other 3D Generative Work
- NeRF + Other applications
- NeRF + Gaming
- NeRF + Quality Metric
- NeRF + CAD
- NeRF + GIS
- NeRF + Terrain
- NeRF + Satellite Images / Radar
- NeRF + UAV/Drone
- NeRF + Copyright protection and Security
- NeRF + Motion Detection
- NeRF Defect Detection
- Datasets
- Neural Surface Reconstruction
- Other Important Related Work
- New Ideas
- Contributors
- License
🔥NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
ECCV 2020, 19 Mar 2020
Abstract
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x,y,z) and viewing direction (θ,ϕ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.[arXiv] [Project] [Code] [PyTorch Impl] [Notes]
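The abstract above summarizes NeRF's two core steps: query an MLP along each camera ray for density and color, then composite the samples with classic volume rendering. As a quick reference for newcomers, below is a minimal sketch of that compositing quadrature (PyTorch-style; the function name, tensor shapes, and the assumption that densities and colors are already sampled are ours, not the authors' code).

```python
import torch

def composite_along_ray(sigmas, rgbs, t_vals):
    """NeRF-style volume rendering quadrature for one ray.

    sigmas: (S,) volume densities from the MLP
    rgbs:   (S, 3) view-dependent colors from the MLP
    t_vals: (S,) sample distances along the ray
    """
    # Spacing between adjacent samples; the last interval is left open-ended.
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, torch.full_like(deltas[:1], 1e10)])

    # Per-segment opacity and accumulated transmittance.
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:1]), 1.0 - alphas + 1e-10]), dim=0)[:-1]
    weights = alphas * trans

    rgb = (weights[:, None] * rgbs).sum(dim=0)    # expected color
    depth = (weights * t_vals).sum(dim=0)         # expected termination depth
    return rgb, depth, weights
```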
🔥State of the Art on Neural Rendering
Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B Goldman, Michael Zollhöfer
ECCV 2020, 8 Apr 2020
Abstract
Efficient rendering of photo-realistic virtual worlds is a long standing effort of computer graphics. Modern graphics techniques have succeeded in synthesizing photo-realistic images from hand-crafted scene representations. However, the automatic generation of shape, materials, lighting, and other aspects of scenes remains a challenging problem that, if solved, would make photo-realistic computer graphics more widely accessible. Concurrently, progress in computer vision and machine learning have given rise to a new approach to image synthesis and editing, namely deep generative models. Neural rendering is a new and rapidly emerging field that combines generative machine learning techniques with physical knowledge from computer graphics, e.g., by the integration of differentiable rendering into network training. With a plethora of applications in computer graphics and vision, neural rendering is poised to become a new area in the graphics community, yet no survey of this emerging field exists. This state-of-the-art report summarizes the recent trends and applications of neural rendering. We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs. Starting with an overview of the underlying computer graphics and machine learning concepts, we discuss critical aspects of neural rendering approaches. This state-of-the-art report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of such technology and investigate open research problems.[arXiv]
🔥Advances in Neural Rendering
Ayush Tewari, Justus Thies, Ben Mildenhall, Pratul Srinivasan, Edgar Tretschk, Yifan Wang, Christoph Lassner, Vincent Sitzmann, Ricardo Martin-Brualla, Stephen Lombardi, Tomas Simon, Christian Theobalt, Matthias Niessner, Jonathan T. Barron, Gordon Wetzstein, Michael Zollhoefer, Vladislav Golyanik
ECCV 2022, 10 Nov 2021
Abstract
Synthesizing photo-realistic images and videos is at the heart of computer graphics and has been the focus of decades of research. Traditionally, synthetic images of a scene are generated using rendering algorithms such as rasterization or ray tracing, which take specifically defined representations of geometry and material properties as input. Collectively, these inputs define the actual scene and what is rendered, and are referred to as the scene representation (where a scene consists of one or more objects). Example scene representations are triangle meshes with accompanied textures (e.g., created by an artist), point clouds (e.g., from a depth sensor), volumetric grids (e.g., from a CT scan), or implicit surface functions (e.g., truncated signed distance fields). The reconstruction of such a scene representation from observations using differentiable rendering losses is known as inverse graphics or inverse rendering. Neural rendering is closely related, and combines ideas from classical computer graphics and machine learning to create algorithms for synthesizing images from real-world observations. Neural rendering is a leap forward towards the goal of synthesizing photo-realistic image and video content. In recent years, we have seen immense progress in this field through hundreds of publications that show different ways to inject learnable components into the rendering pipeline. This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations. A key advantage of these methods is that they are 3D-consistent by design, enabling applications such as novel viewpoint synthesis of a captured scene. In addition to methods that handle static scenes, we cover neural scene representations for modeling non-rigidly deforming objects...[arXiv]
🔥NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review
Kyle Gao, Yina Gao, Hongjie He, Dening Lu, Linlin Xu, Jonathan Li
TPAMI 2022, 1 Oct 2022
Abstract
Neural Radiance Field (NeRF) has recently become a significant development in the field of Computer Vision, allowing for implicit, neural network-based scene representation and novel view synthesis. NeRF models have found diverse applications in robotics, urban mapping, autonomous navigation, virtual reality/augmented reality, and more. Due to the growing popularity of NeRF and its expanding research area, we present a comprehensive survey of NeRF papers from the past two years. Our survey is organized into architecture and application-based taxonomies and provides an introduction to the theory of NeRF and its training via differentiable volume rendering. We also present a benchmark comparison of the performance and speed of key NeRF models. By creating this survey, we hope to introduce new researchers to NeRF, provide a helpful reference for influential works in this field, as well as motivate future research directions with our discussion section.[arXiv]
Neural Radiance Fields: Past, Present, and Future
Ansh Mittal
In Progress, 20 Apr 2023
[arXiv]
Survey on Fundamental Deep Learning 3D Reconstruction Techniques
Yonge Bai, LikHang Wong, TszYin Twan
arXiv preprint, 11 Jul 2024
[arXiv]
3D Representation Methods: A Survey
Zhengren Wang
arXiv preprint, 9 Oct 2024
[arXiv]
🔥Neural Rendering Course
SIGGRAPH 2021 [BiliBili]
🔥Neural Volumetric Rendering for Computer Vision
ECCV 2022 Tutorial [Website]
🔥Scaling NeRF Up and Down: Big Scenes and Real-Time View Synthesis
I3D 2023 Keynote [Video]
NerfBaselines: Consistent and Reproducible Evaluation of Novel View Synthesis Methods
Jonas Kulhanek, Torsten Sattler
arXiv preprint, 25 Jun 2024
[arXiv] [Project]
🔥Nerfstudio: A Modular Framework for Neural Radiance Field Development
Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa
arXiv preprint, 8 Feb 2023
Nerfstudio provides a simple API that allows for a simplified end-to-end process of creating, training, and visualizing NeRFs. The library supports an interpretable implementation of NeRFs by modularizing each component.
🔥NerfAcc: Efficient Sampling Accelerates NeRFs
Ruilong Li, Hang Gao, Matthew Tancik, Angjoo Kanazawa
arXiv preprint, 8 May 2023
NerfAcc is a PyTorch NeRF acceleration toolbox for both training and inference. It focuses on efficient sampling in the volumetric rendering pipeline of radiance fields, which is universal and plug-and-play for most NeRFs. With minimal modifications to existing codebases, NerfAcc provides significant speedups when training various recent NeRF papers, and it exposes a pure Python interface with flexible APIs.
🔥threestudio: A unified framework for 3D content generation
Yuan-Chen Guo and Ying-Tian Liu and Chen Wang and Zi-Xin Zou and Guan Luo and Chia-Hao Chen and Yan-Pei Cao and Song-Hai Zhang
Github repo, 2023
threestudio is a unified framework for 3D content creation from text prompts, single images, and few-shot images, by lifting 2D text-to-image generation models.
[Github]
🔥NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth
CVPR 2021, 5 Aug 2020
Abstract
We present a learning-based method for synthesizing novel views of complex scenes using only unstructured collections of in-the-wild photographs. We build on Neural Radiance Fields (NeRF), which uses the weights of a multilayer perceptron to model the density and color of a scene as a function of 3D coordinates. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. We introduce a series of extensions to NeRF to address these issues, thereby enabling accurate reconstructions from unstructured image collections taken from the internet. We apply our system, dubbed NeRF-W, to internet photo collections of famous landmarks, and demonstrate temporally consistent novel view renderings that are significantly closer to photorealism than the prior state of the art.
🔥Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, Pratul P. Srinivasan
ICCV 2021, 24 Mar 2021
Abstract
The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (a la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 17% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.[arXiv] [Project] [Github] [Notes]
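For readers catching up on mip-NeRF, the practical change over vanilla NeRF is the integrated positional encoding (IPE): each conical frustum is approximated by a Gaussian, and the expected sine/cosine features under that Gaussian are computed in closed form, which damps frequencies the frustum cannot resolve. A rough sketch of the diagonal-covariance form is below (variable names and shapes are our assumptions, not the official code).

```python
import torch

def integrated_pos_enc(means, diag_cov, num_freqs=16):
    """Integrated positional encoding (diagonal-covariance form).

    means:    (..., 3) Gaussian means approximating a conical frustum
    diag_cov: (..., 3) diagonal of that Gaussian's covariance
    """
    scales = 2.0 ** torch.arange(num_freqs, dtype=means.dtype)   # (L,)
    mu = means[..., None, :] * scales[:, None]                    # (..., L, 3)
    var = diag_cov[..., None, :] * scales[:, None] ** 2           # (..., L, 3)
    damping = torch.exp(-0.5 * var)  # expected sin/cos under the Gaussian
    return torch.cat([torch.sin(mu) * damping,
                      torch.cos(mu) * damping], dim=-1).flatten(-2)
```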
🔥Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
CVPR 2022, 23 Nov 2021
Abstract
Though neural radiance fields (NeRF) have demonstrated impressive view synthesis results on objects and small bounded regions of space, they struggle on "unbounded" scenes, where the camera may point in any direction and content may exist at any distance. In this setting, existing NeRF-like models often produce blurry or low-resolution renderings (due to the unbalanced detail and scale of nearby and distant objects), are slow to train, and may exhibit artifacts due to the inherent ambiguity of the task of reconstructing a large scene from a small set of images. We present an extension of mip-NeRF (a NeRF variant that addresses sampling and aliasing) that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes. Our model, which we dub "mip-NeRF 360" as we target scenes in which the camera rotates 360 degrees around a point, reduces mean-squared error by 57% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps for highly intricate, unbounded real-world scenes.[arXiv] [Project] [Github] [Notes]
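The "non-linear scene parameterization" mentioned in the abstract is a contraction that maps unbounded coordinates into a ball of radius 2, so distant content still occupies a bounded volume. A small sketch of that mapping, under our own naming assumptions rather than the official implementation:

```python
import torch

def contract(x, eps=1e-9):
    """Map unbounded points into a ball of radius 2 (mip-NeRF 360 style).

    Points with norm <= 1 are left unchanged; farther points are squashed
    so that infinity maps to the radius-2 boundary.
    """
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    squashed = (2.0 - 1.0 / norm) * (x / norm)
    return torch.where(norm <= 1.0, x, squashed)
```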
🔥Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan
CVPR 2022, 7 Dec 2021
Abstract
Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing.
🔥PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo
Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong
ECCV 2022, 23 Jul 2022
Abstract
Traditional multi-view photometric stereo (MVPS) methods are often composed of multiple disjoint stages, resulting in noticeable accumulated errors. In this paper, we present a neural inverse rendering method for MVPS based on implicit representation. Given multi-view images of a non-Lambertian object illuminated by multiple unknown directional lights, our method jointly estimates the geometry, materials, and lights. Our method first employs multi-light images to estimate per-view surface normal maps, which are used to regularize the normals derived from the neural radiance field. It then jointly optimizes the surface normals, spatially-varying BRDFs, and lights based on a shadow-aware differentiable rendering layer. After optimization, the reconstructed object can be used for novel-view rendering, relighting, and material editing. Experiments on both synthetic and real datasets demonstrate that our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods. Our code and model can be found at this https URL.
4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions
Zhongshu Wang, Lingzhi Li, Zhen Shen, Li Shen, Liefeng Bo
arXiv preprint, 9 Dec 2022
[arXiv] [Project] [Github]
🔥Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields
Jonathan T. Barron, Ben Mildenhall, Dor Verbin, Pratul P. Srinivasan, Peter Hedman
arXiv preprint, 13 Apr 2023
Abstract
Neural Radiance Field training can be accelerated through the use of grid-based representations in NeRF's learned mapping from spatial coordinates to colors and volumetric density. However, these grid-based approaches lack an explicit understanding of scale and therefore often introduce aliasing, usually in the form of jaggies or missing scene content. Anti-aliasing has previously been addressed by mip-NeRF 360, which reasons about sub-volumes along a cone rather than points along a ray, but this approach is not natively compatible with current grid-based techniques. We show how ideas from rendering and signal processing can be used to construct a technique that combines mip-NeRF 360 and grid-based models such as Instant NGP to yield error rates that are 8% - 77% lower than either prior technique, and that trains 24x faster than mip-NeRF 360.[arXiv] [Project] [Unofficial Impl] [Notes]
Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa
arXiv preprint, 20 Apr 2023
[arXiv] [Project] [Github] [Video]
Multi-Space Neural Radiance Fields
Ze-Xin Yin, Jiaxiong Qiu, Ming-Ming Cheng, Bo Ren
CVPR 2023, 7 May 2023
[arXiv] [Project] [Github] [Video] [Notes]
NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support
Xinyue Wei, Fanbo Xiang, Sai Bi, Anpei Chen, Kalyan Sunkavalli, Zexiang Xu, Hao Su
arXiv preprint, 26 May 2023
[arXiv] [Project] [Video]
Analyzing the Internals of Neural Radiance Fields
Lukas Radl, Andreas Kurz, Markus Steinberger
arXiv preprint, 1 Jun 2023
[arXiv] [Project]
GANeRF: Leveraging Discriminators to Optimize Neural Radiance Fields
Barbara Roessle, Norman Müller, Lorenzo Porzi, Samuel Rota Bulò, Peter Kontschieder, Matthias Nießner
arXiv preprint, 9 Jun 2023
[arXiv] [Video]
HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork
Bipasha Sen, Gaurav Singh, Aditya Agarwal, Rohith Agaram, K Madhava Krishna, Srinath Sridhar
arXiv preprint, 9 Jun 2023
[arXiv]
🔥Tri-MipRF: Tri-Mip Representation for Efficient Anti-Aliasing Neural Radiance Fields
Wenbo Hu, Yuling Wang, Lin Ma, Bangbang Yang, Lin Gao, Xiao Liu, Yuewen Ma
ICCV 2023, 21 Jul 2023
Abstract
Despite the tremendous progress in neural radiance fields (NeRF), we still face a dilemma of the trade-off between quality and efficiency, e.g., MipNeRF presents fine-detailed and anti-aliased renderings but takes days for training, while Instant-ngp can accomplish the reconstruction in a few minutes but suffers from blurring or aliasing when rendering at various distances or resolutions due to ignoring the sampling area. To this end, we propose a novel Tri-Mip encoding that enables both instant reconstruction and anti-aliased high-fidelity rendering for neural radiance fields. The key is to factorize the pre-filtered 3D feature spaces in three orthogonal mipmaps. In this way, we can efficiently perform 3D area sampling by taking advantage of 2D pre-filtered feature maps, which significantly elevates the rendering quality without sacrificing efficiency. To cope with the novel Tri-Mip representation, we propose a cone-casting rendering technique to efficiently sample anti-aliased 3D features with the Tri-Mip encoding considering both pixel imaging and observing distance. Extensive experiments on both synthetic and real-world datasets demonstrate our method achieves state-of-the-art rendering quality and reconstruction speed while maintaining a compact representation that reduces 25% model size compared against Instant-ngp.
Bayes' Rays: Uncertainty Quantification for Neural Radiance Fields
Lily Goli, Cody Reading, Silvia Sellán, Alec Jacobson, Andrea Tagliasacchi
arXiv preprint, 6 Sep 2023
[arXiv] [Project]
ResFields: Residual Neural Fields for Spatiotemporal Signals
Marko Mihajlovic, Sergey Prokudin, Marc Pollefeys, Siyu Tang
arXiv preprint, 1 Oct 2023
[arXiv] [Project] [Github]
🔥NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions
Zhang Chen, Zhong Li, Liangchen Song, Lele Chen, Jingyi Yu, Junsong Yuan, Yi Xu
ICCV 2023, 27 Sep 2023
Abstract
We present a novel type of neural fields that uses general radial bases for signal representation. State-of-the-art neural fields typically rely on grid-based representations for storing local neural features and N-dimensional linear kernels for interpolating features at continuous query points. The spatial positions of their neural features are fixed on grid nodes and cannot well adapt to target signals. Our method instead builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals. To further improve the channel-wise capacity of radial basis functions, we propose to compose them with multi-frequency sinusoid functions. This technique extends a radial basis to multiple Fourier radial bases of different frequency bands without requiring extra parameters, facilitating the representation of details. Moreover, by marrying adaptive radial bases with grid-based ones, our hybrid combination inherits both adaptivity and interpolation smoothness. We carefully designed weighting schemes to let radial bases adapt to different types of signals effectively. Our experiments on 2D image and 3D signed distance field representation demonstrate the higher accuracy and compactness of our method than prior arts. When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
Multi-task View Synthesis with Neural Radiance Fields
Shuhong Zheng, Zhipeng Bao, Martial Hebert, Yu-Xiong Wang
ICCV 2023, 29 Sep 2023
[arXiv] [Project] [Github]
Hyb-NeRF: A Multiresolution Hybrid Encoding for Neural Radiance Fields
Yifan Wang, Yi Gong, Yuan Zeng
arXiv preprint, 21 Nov 2023
[arXiv]
VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations
Yiying Yang, Wen Liu, Fukun Yin, Xin Chen, Gang Yu, Jiayuan Fan, Tao Chen
AAAI 2024, 23 Oct 2023
[arXiv]
Rethinking Directional Integration in Neural Radiance Fields
Congyue Deng, Jiawei Yang, Leonidas Guibas, Yue Wang
arXiv preprint, 28 Nov 2023
[arXiv]
RING-NeRF: A Versatile Architecture based on Residual Implicit Neural Grids
Doriand Petit, Steve Bourgeois, Dumitru Pavel, Vincent Gay-Bellile, Florian Chabot, Loic Barthe
arXiv preprint, 6 Dec 2023
[arXiv]
Methods and strategies for improving the novel view synthesis quality of neural radiation field
Shun Fang, Ming Cui, Xing Feng, Yanna Lv
arXiv preprint, 23 Jan 2024
[arXiv]
Divide and Conquer: Rethinking the Training Paradigm of Neural Radiance Fields
Rongkai Ma, Leo Lebrat, Rodrigo Santa Cruz, Gil Avraham, Yan Zuo, Clinton Fookes, Olivier Salvado
arXiv preprint, 29 Jan 2024
[arXiv]
TaylorGrid: Towards Fast and High-Quality Implicit Field Learning via Direct Taylor-based Grid Optimization
Renyi Mao, Qingshan Xu, Peng Zheng, Ye Wang, Tieru Wu, Rui Ma
arXiv preprint, 22 Feb 2024
[arXiv]
Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields
Seungtae Nam, Daniel Rho, Jong Hwan Ko, Eunbyung Park
NeurIPS 2023, 22 Feb 2024
[arXiv] [Project] [Code]
NeRF-VPT: Learning Novel View Representations with Neural Radiance Fields via View Prompt Tuning
Linsheng Chen, Guangrun Wang, Liuchun Yuan, Keze Wang, Ken Deng, Philip H.S. Torr
AAAI 2024, 2 Mar 2024
[arXiv] [Code]
RoGUENeRF: A Robust Geometry-Consistent Universal Enhancer for NeRF
Sibi Catley-Chandar, Richard Shaw, Gregory Slabaugh, Eduardo Perez-Pellitero
arXiv preprint, 18 Mar 2024
[arXiv]
Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation
Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Müller, Matthias Nießner
arXiv preprint, 28 Mar 2024
[arXiv] [Project] [Video]
Sine Activated Low-Rank Matrices for Parameter Efficient Learning
Yiping Ji, Hemanth Saratchandran, Cameron Gordon, Zeyu Zhang, Simon Lucey
arXiv preprint, 28 Mar 2024
[arXiv]
Neural Radiance Fields with Torch Units
Bingnan Ni, Huanyu Wang, Dongfeng Bai, Minghe Weng, Dexin Qi, Weichao Qiu, Bingbing Liu
arXiv preprint, 3 Apr 2024
[arXiv]
Alpha Invariance: On Inverse Scaling Between Distance and Volume Density in Neural Radiance Fields
Joshua Ahn, Haochen Wang, Raymond A. Yeh, Greg Shakhnarovich
CVPR 2024, 2 Apr 2024
[arXiv] [Project]
RaFE: Generative Radiance Fields Restoration
Zhongkai Wu, Ziyu Wan, Jing Zhang, Jing Liao, Dong Xu
arXiv preprint, 4 Apr 2024
[arXiv] [Project] [Code] [Video]
Bayesian NeRF: Quantifying Uncertainty with Volume Density in Neural Radiance Fields
Sibeak Lee, Kyeongsu Kang, Hyeonwoo Yu
arXiv preprint, 10 Apr 2024
[arXiv]
🔥Rip-NeRF: Anti-aliasing Radiance Fields with Ripmap-Encoded Platonic Solids
Junchen Liu, Wenbo Hu, Zhuo Yang, Jianteng Chen, Guoliang Wang, Xiaoxue Chen, Yantong Cai, Huan-ang Gao, Hao Zhao
SIGGRAPH 2024, 3 May 2024
Abstract
Despite significant advancements in Neural Radiance Fields (NeRFs), the renderings may still suffer from aliasing and blurring artifacts, since it remains a fundamental challenge to effectively and efficiently characterize anisotropic areas induced by the cone-casting procedure. This paper introduces a Ripmap-Encoded Platonic Solid representation to precisely and efficiently featurize 3D anisotropic areas, achieving high-fidelity anti-aliasing renderings. Central to our approach are two key components: Platonic Solid Projection and Ripmap encoding. The Platonic Solid Projection factorizes the 3D space onto the unparalleled faces of a certain Platonic solid, such that the anisotropic 3D areas can be projected onto planes with distinguishable characterization. Meanwhile, each face of the Platonic solid is encoded by the Ripmap encoding, which is constructed by anisotropically pre-filtering a learnable feature grid, to enable featurzing the projected anisotropic areas both precisely and efficiently by the anisotropic area-sampling. Extensive experiments on both well-established synthetic datasets and a newly captured real-world dataset demonstrate that our Rip-NeRF attains state-of-the-art rendering quality, particularly excelling in the fine details of repetitive structures and textures, while maintaining relatively swift training times.[arXiv] [Project] [Code] [Video]
DistGrid: Scalable Scene Reconstruction with Distributed Multi-resolution Hash Grid
Sidun Liu, Peng Qiao, Zongxin Ye, Wenyu Li, Yong Dou
SIGGRAPH Asia 2023, 7 May 2024
[arXiv]
NPLMV-PS: Neural Point-Light Multi-View Photometric Stereo
Fotios Logothetis, Ignas Budvytis, Roberto Cipolla
arXiv preprint, 20 May 2024
[arXiv]
PruNeRF: Segment-Centric Dataset Pruning via 3D Spatial Consistency
Yeonsung Jung, Heecheol Yun, Joonhyung Park, Jin-Hwa Kim, Eunho Yang
arXiv preprint, 2 Jun 2024
[arXiv]
NeRF Director: Revisiting View Selection in Neural Volume Rendering
Wenhui Xiao, Rodrigo Santa Cruz, David Ahmedt-Aristizabal, Olivier Salvado, Clinton Fookes, Leo Lebrat
CVPR 2024, 13 Jun 2024
[arXiv]
InterNeRF: Scaling Radiance Fields via Parameter Interpolation
Clinton Wang, Peter Hedman, Polina Golland, Jonathan T. Barron, Daniel Duckworth
CVPR 2024 Neural Rendering Intelligence Workshop, 17 Jun 2024
[arXiv]
Uncertainty modeling for fine-tuned implicit functions
Anna Susmelj, Mael Macuglia, Nataša Tagasovska, Reto Sutter, Sebastiano Caprara, Jean-Philippe Thiran, Ender Konukoglu
arXiv preprint, 17 Jun 2024
[arXiv]
Matching Query Image Against Selected NeRF Feature for Efficient and Scalable Localization
Huaiji Zhou, Bing Wang, Changhao Chen
arXiv preprint, 17 Jun 2024
[arXiv]
Federated Neural Radiance Field for Distributed Intelligence
Yintian Zhang, Ziyu Shao
arXiv preprint, 15 Jun 2024
[arXiv]
Drantal-NeRF: Diffusion-Based Restoration for Anti-aliasing Neural Radiance Field
Ganlin Yang, Kaidong Zhang, Jingjing Fu, Dong Liu
arXiv preprint, 10 Jul 2024
[arXiv]
RS-NeRF: Neural Radiance Fields from Rolling Shutter Images
Muyao Niu, Tong Chen, Yifan Zhan, Zhuoxiao Li, Xiang Ji, Yinqiang Zheng
ECCV 2024, 14 Jul 2024
[arXiv] [Code]
Efficient NeRF Optimization -- Not All Samples Remain Equally Hard
Juuso Korhonen, Goutham Rangu, Hamed R. Tavakoli, Juho Kannala
arXiv preprint, 6 Aug 2024
[arXiv]
Magnituder Layers for Implicit Neural Representations in 3D
Sang Min Kim, Byeongchan Kim, Arijit Sehanobish, Krzysztof Choromanski, Dongseok Shim, Avinava Dubey, Min-hwan Oh
arXiv preprint, 13 Oct 2024
[arXiv]
Bringing NeRFs to the Latent Space: Inverse Graphics Autoencoder
Antoine Schnepf, Karim Kassab, Jean-Yves Franceschi, Laurent Caraffa, Flavian Vasile, Jeremie Mary, Andrew Comport, Valerie Gouet-Brunet
arXiv preprint, 30 Oct 2024
[arXiv] [Project]
🔥Depth-supervised NeRF: Fewer Views and Faster Training for Free
Kangle Deng, Andrew Liu, Jun-Yan Zhu, Deva Ramanan
CVPR 2022, 6 Jul 2021
Abstract
A commonly observed failure mode of Neural Radiance Field (NeRF) is fitting incorrect geometries when given an insufficient number of input views. One potential reason is that standard volumetric rendering does not enforce the constraint that most of a scene's geometry consist of empty space and opaque surfaces. We formalize the above assumption through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance fields that takes advantage of readily-available depth supervision. We leverage the fact that current NeRF pipelines require images with known camera poses that are typically estimated by running structure-from-motion (SFM). Crucially, SFM also produces sparse 3D points that can be used as "free" depth supervision during training: we add a loss to encourage the distribution of a ray's terminating depth matches a given 3D keypoint, incorporating depth uncertainty. DS-NeRF can render better images given fewer training views while training 2-3x faster. Further, we show that our loss is compatible with other recently proposed NeRF methods, demonstrating that depth is a cheap and easily digestible supervisory signal. And finally, we find that DS-NeRF can support other types of depth supervision such as scanned depth sensors and RGB-D reconstruction outputs.
🔥NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo
Yi Wei, Shaohui Liu, Yongming Rao, Wang Zhao, Jiwen Lu, Jie Zhou
ICCV 2021, 2 Sep 2021
Abstract
In this work, we present a new multi-view depth estimation method that utilizes both conventional reconstruction and learning-based priors over the recently proposed neural radiance fields (NeRF). Unlike existing neural network based optimization method that relies on estimated correspondences, our method directly optimizes over implicit volumes, eliminating the challenging step of matching pixels in indoor scenes. The key to our approach is to utilize the learning-based priors to guide the optimization process of NeRF. Our system firstly adapts a monocular depth network over the target scene by finetuning on its sparse SfM+MVS reconstruction from COLMAP. Then, we show that the shape-radiance ambiguity of NeRF still exists in indoor environments and propose to address the issue by employing the adapted depth priors to monitor the sampling process of volume rendering. Finally, a per-pixel confidence map acquired by error computation on the rendered image can be used to further improve the depth quality. Experiments show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes, with surprising findings presented on the effectiveness of correspondence-based optimization and NeRF-based optimization over the adapted depth priors. In addition, we show that the guided optimization scheme does not sacrifice the original synthesis capability of neural radiance fields, improving the rendering quality on both seen and novel views. Code is available at this https URL.
Dense Depth Priors for Neural Radiance Fields from Sparse Input Views
Barbara Roessle, Jonathan T. Barron, Ben Mildenhall, Pratul P. Srinivasan, Matthias Nießner
CVPR 2022, 6 Dec 2021
[arXiv] [Project] [Github]
RobustNeRF: Ignoring Distractors with Robust Losses
Sara Sabour, Suhani Vora, Daniel Duckworth, Ivan Krasin, David J. Fleet, Andrea Tagliasacchi
arXiv preprint, 2 Feb 2023
[arXiv] [Project]
Digging into Depth Priors for Outdoor Neural Radiance Fields
Chen Wang, Jiadai Sun, Lina Liu, Chenming Wu, Zhelun Shen, Dayan Wu, Yuchao Dai, Liangjun Zhang
ACMMM 2023, 8 Aug 2023
[arXiv] [Project]
AltNeRF: Learning Robust Neural Radiance Field via Alternating Depth-Pose Optimization
Kun Wang, Zhiqiang Yan, Huang Tian, Zhenyu Zhang, Xiang Li, Jun Li, Jian Yang
AAAI 2024, 19 Aug 2023
[arXiv]
Depth Supervised Neural Surface Reconstruction from Airborne Imagery
Vincent Hackstein, Paul Fauth-Mayer, Matthias Rothermel, Norbert Haala
arXiv preprint, 25 Apr 2024
[arXiv]
🔥Implicit Neural Representations with Periodic Activation Functions
Vincent Sitzmann, Julien N. P. Martel, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
NeurIPS 2020, 17 Jun 2020
Abstract
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.[arXiv]
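To make the SIREN idea concrete, each layer is a linear map followed by sin(ω0·x), with a scaled uniform weight initialization so activations stay well-distributed through depth. A commonly used minimal sketch (ω0 = 30 follows the paper; everything else here is an illustrative assumption):

```python
import math
import torch
import torch.nn as nn

class SirenLayer(nn.Module):
    """Linear layer followed by a sine activation, initialized as in SIREN."""

    def __init__(self, in_features, out_features, w0=30.0, is_first=False):
        super().__init__()
        self.w0 = w0
        self.linear = nn.Linear(in_features, out_features)
        # First layer: U(-1/n, 1/n); hidden layers: U(-sqrt(6/n)/w0, sqrt(6/n)/w0).
        bound = 1.0 / in_features if is_first else math.sqrt(6.0 / in_features) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))
```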
Multiplicative Filter Networks
Rizal Fathony, Anit Kumar Sahu, Devin Willmott, J Zico Kolter
ICLR 2021, 13 Jan 2021
[OpenReview]
PREF: Phasorial Embedding Fields for Compact Neural Representations
Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
arXiv preprint, 26 May 2022
[arXiv]
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
arXiv preprint, 18 Jun 2020
[arXiv]
BACON: Band-limited Coordinate Networks for Multiscale Scene Representation
David B. Lindell, Dave Van Veen, Jeong Joon Park, Gordon Wetzstein
CVPR 2022, 9 Dec 2021
[arXiv] [Project]
Improved Implicit Neural Representation with Fourier Bases Reparameterized Training
Kexuan Shi, Xingyu Zhou, Shuhang Gu
arXiv preprint, 15 Jan 2024
[arXiv]
🔥Nerfies: Deformable Neural Radiance Fields
Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, Ricardo Martin-Brualla
ICCV 2021, 25 Nov 2020
Abstract
We present the first method capable of photorealistically reconstructing deformable scenes using photos/videos captured casually from mobile phones. Our approach augments neural radiance fields (NeRF) by optimizing an additional continuous volumetric deformation field that warps each observed point into a canonical 5D NeRF. We observe that these NeRF-like deformation fields are prone to local minima, and propose a coarse-to-fine optimization method for coordinate-based models that allows for more robust optimization. By adapting principles from geometry processing and physical simulation to NeRF-like models, we propose an elastic regularization of the deformation field that further improves robustness. We show that our method can turn casually captured selfie photos/videos into deformable NeRF models that allow for photorealistic renderings of the subject from arbitrary viewpoints, which we dub "nerfies." We evaluate our method by collecting time-synchronized data using a rig with two mobile phones, yielding train/validation images of the same pose at different viewpoints. We show that our method faithfully reconstructs non-rigidly deforming scenes and reproduces unseen views with high fidelity.
Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes
Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang
CVPR 2021, 26 Nov 2020
[arXiv] [Project] [Github]
HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields
Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz
CVPR 2021, 24 Jun 2021
[arXiv] [Project] [Github]
CodeNeRF: Disentangled Neural Radiance Fields for Object Categories
Wonbong Jang, Lourdes Agapito
ICCV 2021, 3 Sep 2021
[arXiv] [Project] [Github]
Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation
Abhijit Kundu, Kyle Genova, Xiaoqi Yin, Alireza Fathi, Caroline Pantofaru, Leonidas Guibas, Andrea Tagliasacchi, Frank Dellaert, Thomas Funkhouser
CVPR 2022, 9 May 2022
[arXiv] [Project]
D2NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video
Tianhao Wu, Fangcheng Zhong, Andrea Tagliasacchi, Forrester Cole, Cengiz Oztireli
CVPR 2022, 31 May 2022
[arXiv] [Project] [Github]
🔥Deforming Radiance Fields with Cages
Tianhan Xu, Tatsuya Harada
ECCV 2022, 25 Jul 2022
Abstract
Recent advances in radiance fields enable photorealistic rendering of static or dynamic 3D scenes, but still do not support explicit deformation that is used for scene manipulation or animation. In this paper, we propose a method that enables a new type of deformation of the radiance field: free-form radiance field deformation. We use a triangular mesh that encloses the foreground object called cage as an interface, and by manipulating the cage vertices, our approach enables the free-form deformation of the radiance field. The core of our approach is cage-based deformation which is commonly used in mesh deformation. We propose a novel formulation to extend it to the radiance field, which maps the position and the view direction of the sampling points from the deformed space to the canonical space, thus enabling the rendering of the deformed scene. The deformation results of the synthetic datasets and the real-world datasets demonstrate the effectiveness of our approach.
🔥NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields
Liangchen Song, Anpei Chen, Zhong Li, Zhang Chen, Lele Chen, Junsong Yuan, Yi Xu, Andreas Geiger
IEEE TVCG, 28 Oct 2022
Abstract
Visually exploring in a real-world 4D spatiotemporal space freely in VR has been a long-term quest. The task is especially appealing when only a few or even single RGB cameras are used for capturing the dynamic scene. To this end, we present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering. First, we propose to decompose the 4D spatiotemporal space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas. Each area is represented and regularized by a separate neural field. Second, we propose a hybrid representations based feature streaming scheme for efficiently modeling the neural fields. Our approach, coined NeRFPlayer, is evaluated on dynamic scenes captured by single hand-held cameras and multi-camera arrays, achieving comparable or superior rendering performance in terms of quality and speed comparable to recent state-of-the-art methods, achieving reconstruction in 10 seconds per frame and interactive rendering.[arXiv] [Project] [Github(in nerfstudio)]
🔥HyperReel: High-Fidelity 6-DoF Video with Ray-Conditioned Sampling
Benjamin Attal, Jia-Bin Huang, Christian Richardt, Michael Zollhoefer, Johannes Kopf, Matthew O'Toole, Changil Kim
arXiv preprint, 5 Jan 2023
Abstract
Volumetric scene representations enable photorealistic view synthesis for static scenes and form the basis of several existing 6-DoF video techniques. However, the volume rendering procedures that drive these representations necessitate careful trade-offs in terms of quality, rendering speed, and memory efficiency. In particular, existing methods fail to simultaneously achieve real-time performance, small memory footprint, and high-quality rendering for challenging real-world scenes. To address these issues, we present HyperReel -- a novel 6-DoF video representation. The two core components of HyperReel are: (1) a ray-conditioned sample prediction network that enables high-fidelity, high frame rate rendering at high resolutions and (2) a compact and memory-efficient dynamic volume representation. Our 6-DoF video pipeline achieves the best performance compared to prior and contemporary approaches in terms of visual quality with small memory requirements, while also rendering at up to 18 frames-per-second at megapixel resolution without any custom CUDA code.
CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation
Yicong Peng, Yichao Yan, Shengqi Liu, Yuhao Cheng, Shanyan Guan, Bowen Pan, Guangtao Zhai, Xiaokang Yang
NeurIPS 2022, 01 Nov 2022
[OpenReview] [Zhihu] [Github]
DynIBaR: Neural Dynamic Image-Based Rendering
Zhengqi Li, Qianqian Wang, Forrester Cole, Richard Tucker, Noah Snavely
CVPR 2023, 20 Nov 2022
[arXiv] [Project] [Github] [Video]
MonoNeRF: Learning a Generalizable Dynamic Radiance Field from Monocular Videos
Fengrui Tian, Shaoyi Du, Yueqi Duan
ICCV 2023, 26 Dec 2022
[arXiv] [Github]
Robust Dynamic Radiance Fields
Yu-Lun Liu, Chen Gao, Andreas Meuleman, Hung-Yu Tseng, Ayush Saraf, Changil Kim, Yung-Yu Chuang, Johannes Kopf, Jia-Bin Huang
CVPR 2023, 5 Jan 2023
[arXiv] [Project] [Video]
🔥HexPlane: A Fast Representation for Dynamic Scenes
Ang Cao, Justin Johnson
CVPR 2023, 23 Jan 2023
Abstract
Modeling and re-rendering dynamic 3D scenes is a challenging task in 3D vision. Prior approaches build on NeRF and rely on implicit representations. This is slow since it requires many MLP evaluations, constraining real-world applications. We show that dynamic 3D scenes can be explicitly represented by six planes of learned features, leading to an elegant solution we call HexPlane. A HexPlane computes features for points in spacetime by fusing vectors extracted from each plane, which is highly efficient. Pairing a HexPlane with a tiny MLP to regress output colors and training via volume rendering gives impressive results for novel view synthesis on dynamic scenes, matching the image quality of prior work but reducing training time by more than 100×. Extensive ablations confirm our HexPlane design and show that it is robust to different feature fusion mechanisms, coordinate systems, and decoding mechanisms. HexPlane is a simple and effective solution for representing 4D volumes, and we hope they can broadly contribute to modeling spacetime for dynamic 3D scenes.
🔥K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
Sara Fridovich-Keil, Giacomo Meanti, Frederik Warburg, Benjamin Recht, Angjoo Kanazawa
CVPR 2023, 24 Jan 2023
Abstract
We introduce k-planes, a white-box model for radiance fields in arbitrary dimensions. Our model uses d choose 2 planes to represent a d-dimensional scene, providing a seamless way to go from static (d=3) to dynamic (d=4) scenes. This planar factorization makes adding dimension-specific priors easy, e.g. temporal smoothness and multi-resolution spatial structure, and induces a natural decomposition of static and dynamic components of a scene. We use a linear feature decoder with a learned color basis that yields similar performance as a nonlinear black-box MLP decoder. Across a range of synthetic and real, static and dynamic, fixed and varying appearance scenes, k-planes yields competitive and often state-of-the-art reconstruction fidelity with low memory usage, achieving 1000x compression over a full 4D grid, and fast optimization with a pure PyTorch implementation. For video results and code, please see this https URL.[arXiv] [Project] [Github] [Notes]
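As a concrete picture of the k-planes factorization: a dynamic (d = 4) scene uses d-choose-2 = 6 feature planes (xy, xz, yz, xt, yt, zt); a query point's feature is the element-wise product of bilinearly interpolated features from each plane, which a small decoder then maps to color and density. A hedged sketch of the lookup (plane resolution, feature width, and names are our assumptions, not the authors' code):

```python
import torch
import torch.nn.functional as F

# Six learnable feature planes for a 4D (x, y, z, t) scene.
C, R = 32, 128                                     # feature width, plane resolution
plane_axes = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]
planes = [torch.randn(1, C, R, R, requires_grad=True) for _ in plane_axes]

def kplanes_features(points):
    """points: (N, 4) coordinates normalized to [-1, 1]; returns (N, C)."""
    feats = torch.ones(points.shape[0], C)
    for plane, (a, b) in zip(planes, plane_axes):
        grid = points[:, [a, b]].view(1, -1, 1, 2)           # (1, N, 1, 2)
        sampled = F.grid_sample(plane, grid, align_corners=True)
        feats = feats * sampled.view(C, -1).t()              # Hadamard product
    return feats  # decoded to color/density by a small linear head
```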
PAC-NeRF: Physics Augmented Continuum Neural Radiance Fields for Geometry-Agnostic System Identification
Xuan Li, Yi-Ling Qiao, Peter Yichen Chen, Krishna Murthy Jatavallabhula, Ming Lin, Chenfanfu Jiang, Chuang Gan
ICLR 2023, 02 Feb 2023
[OpenReview] [Project] [Github]
Temporal Interpolation Is All You Need for Dynamic Neural Radiance Fields
Sungheon Park, Minjung Son, Seokhwan Jang, Young Chun Ahn, Ji-Yeon Kim, Nahyup Kang
CVPR 2023, 18 Feb 2023
[arXiv] [Project] [Video]
OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields
Zhiwen Yan, Chen Li, Gim Hee Lee
arXiv preprint, 24 May 2023
[arXiv]
SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes
Edith Tretschk, Vladislav Golyanik, Michael Zollhoefer, Aljaz Bozic, Christoph Lassner, Christian Theobalt
arXiv preprint, 16 Aug 2023
[arXiv] [Project] [Video]
ResFields: Residual Neural Fields for Spatiotemporal Signals
Marko Mihajlovic, Sergey Prokudin, Marc Pollefeys, Siyu Tang
arXiv preprint, 6 Sep 2023
[arXiv] [Project] [Github]
Dynamic Mesh-Aware Radiance Fields
Yi-Ling Qiao, Alexander Gao, Yiran Xu, Yue Feng, Jia-Bin Huang, Ming C. Lin
ICCV 2023, 8 Sep 2023
[arXiv] [Project] [Github]
DynaMoN: Motion-Aware Fast And Robust Camera Localization for Dynamic NeRF
Mert Asim Karaoglu, Hannah Schieber, Nicolas Schischka, Melih Görgülü, Florian Grötzner, Alexander Ladikos, Daniel Roth, Nassir Navab, Benjamin Busam
arXiv preprint, 16 Sep 2023
[arXiv]
Point-DynRF: Point-based Dynamic Radiance Fields from a Monocular Video
Byeongjun Park, Changick Kim
WACV 2024, 14 Oct 2023
[arXiv]
Sync-NeRF: Generalizing Dynamic NeRFs to Unsynchronized Videos
Seoha Kim, Jeongmin Bae, Youngsik Yun, Hahyun Lee, Gun Bang, Youngjung Uh
arXiv preprint, 20 Oct 2023
[arXiv] [Project]
DreaMo: Articulated 3D Reconstruction From A Single Casual Video
Tao Tu, Ming-Feng Li, Chieh Hubert Lin, Yen-Chi Cheng, Min Sun, Ming-Hsuan Yang
arXiv preprint, 5 Dec, 2023
[arXiv] [Project]
NeVRF: Neural Video-based Radiance Fields for Long-duration Sequences
Minye Wu, Tinne Tuytelaars
arXiv preprint, 10 Dec 2023
[arXiv]
Fast High Dynamic Range Radiance Fields for Dynamic Scenes
Guanjun Wu, Taoran Yi, Jiemin Fang, Wenyu Liu, Xinggang Wang
3DV 2024, 11 Jan 2024
[arXiv] [Project] [Code]
DaReNeRF: Direction-aware Representation for Dynamic Scenes
Ange Lou, Benjamin Planche, Zhongpai Gao, Yamin Li, Tianyu Luan, Hao Ding, Terrence Chen, Jack Noble, Ziyan Wu
CVPR 2024, 4 Mar 2024
[arXiv]
S-DyRF: Reference-Based Stylized Radiance Fields for Dynamic Scenes
Xingyi Li, Zhiguo Cao, Yizheng Wu, Kewei Wang, Ke Xian, Zhe Wang, Guosheng Lin
CVPR 2024, 10 Mar 2024
[arXiv] [Project] [Code]
NeRF-HuGS: Improved Neural Radiance Fields in Non-static Scenes Using Heuristics-Guided Segmentation
Jiahao Chen, Yipeng Qin, Lingjie Liu, Jiangbo Lu, Guanbin Li
CVPR 2024, 26 Mar 2024
[arXiv] [Project] [Code]
LiDAR4D: Dynamic Neural Fields for Novel Space-time View LiDAR Synthesis
Zehan Zheng, Fan Lu, Weiyi Xue, Guang Chen, Changjun Jiang
CVPR 2024, 3 Apr 2024
[arXiv] [Project] [Code]
TK-Planes: Tiered K-Planes with High Dimensional Feature Vectors for Dynamic UAV-based Scenes
Christopher Maxey, Jaehoon Choi, Yonghan Lee, Hyungtae Lee, Dinesh Manocha, Heesung Kwon
IROS 2024, 4 May 2024
[arXiv]
JointRF: End-to-End Joint Optimization for Dynamic Neural Radiance Field Representation and Compression
Zihan Zheng, Houqiang Zhong, Qiang Hu, Xiaoyun Zhang, Li Song, Ya Zhang, Yanfeng Wang
arXiv preprint, 23 May 2024
[arXiv]
Improving Physics-Augmented Continuum Neural Radiance Field-Based Geometry-Agnostic System Identification with Lagrangian Particle Optimization
Takuhiro Kaneko
CVPR 2024, 6 Jun 2024
[arXiv] [Project] [Video]
TutteNet: Injective 3D Deformations by Composition of 2D Mesh Deformations
Bo Sun, Thibault Groueix, Chen Song, Qixing Huang, Noam Aigerman
arXiv preprint, 17 Jun 2024
[arXiv]
Dynamic Neural Radiance Field From Defocused Monocular Video
Xianrui Luo, Huiqiang Sun, Juewen Peng, Zhiguo Cao
ECCV 2024, 8 Jul 2024
[arXiv]
KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter
Yifan Zhan, Zhuoxiao Li, Muyao Niu, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
ECCV 2024, 18 Jul 2024
[arXiv]
TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene
Sandika Biswas, Qianyi Wu, Biplab Banerjee, Hamid Rezatofighi
NeurIPS 2024, 26 Sep 2024
[arXiv]
Deformable NeRF using Recursively Subdivided Tetrahedra
Zherui Qiu, Chenqu Ren, Kaiwen Song, Xiaoyi Zeng, Leyuan Yang, Juyong Zhang
ACM MM 2024, 6 Oct 2024
[arXiv] [Project]
🔥Neural Sparse Voxel Fields
Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt
NeurIPS 2020, 22 Jul 2020
Abstract
Photo-realistic free-viewpoint rendering of real-world scenes using classical computer graphics techniques is challenging, because it requires the difficult step of capturing detailed appearance and geometry models. Recent studies have demonstrated promising results by learning scene representations that implicitly encode both geometry and appearance without 3D supervision. However, existing approaches in practice often show blurry renderings caused by the limited network capacity or the difficulty in finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often requires time-consuming optical ray marching. In this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering. NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell. We progressively learn the underlying voxel structures with a differentiable ray-marching operation from only a set of posed RGB images. With the sparse voxel octree structure, rendering novel views can be accelerated by skipping the voxels containing no relevant scene content. Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF(Mildenhall et al., 2020)) at inference time while achieving higher quality results. Furthermore, by utilizing an explicit sparse voxel representation, our method can easily be applied to scene editing and scene composition. We also demonstrate several challenging tasks, including multi-scene learning, free-viewpoint rendering of a moving human, and large-scale scene rendering. Code and data are available at our website: this https URL.[arXiv]
🔥NeRF++: Analyzing and Improving Neural Radiance Fields
Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun
arXiv preprint, 15 Oct 2020
Abstract
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360 capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multi-layer perceptrons (MLPs) representing view-invariant opacity and view-dependent color volumes to a set of training images, and samples novel views based on volume rendering techniques. In this technical report, we first remark on radiance fields and their potential ambiguities, namely the shape-radiance ambiguity, and analyze NeRF's success in avoiding such ambiguities. Second, we address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, unbounded 3D scenes. Our method improves view synthesis fidelity in this challenging scenario. Code is available at this https URL.
DeRF: Decomposed Radiance Fields
Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
CVPR 2021, 25 Nov 2020
[arXiv] [Project] [Github]
🔥AutoInt: Automatic Integration for Fast Neural Volume Rendering
David B. Lindell, Julien N. P. Martel, Gordon Wetzstein
CVPR 2021, 15 Oct 2020
Abstract
Numerical integration is a foundational technique in scientific computing and is at the core of many computer vision applications. Among these applications, neural volume rendering has recently been proposed as a new paradigm for view synthesis, achieving photorealistic image quality. However, a fundamental obstacle to making these methods practical is the extreme computational and memory requirements caused by the required volume integrations along the rendered rays during training and inference. Millions of rays, each requiring hundreds of forward passes through a neural network are needed to approximate those integrations with Monte Carlo sampling. Here, we propose automatic integration, a new framework for learning efficient, closed-form solutions to integrals using coordinate-based neural networks. For training, we instantiate the computational graph corresponding to the derivative of the network. The graph is fitted to the signal to integrate. After optimization, we reassemble the graph to obtain a network that represents the antiderivative. By the fundamental theorem of calculus, this enables the calculation of any definite integral in two evaluations of the network. Applying this approach to neural rendering, we improve a tradeoff between rendering speed and image quality: improving render times by greater than 10 times with a tradeoff of slightly reduced image quality.
DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks
DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks
Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H. Mueller, Chakravarty R. Alla Chaitanya, Anton Kaplanyan, Markus Steinberger
EGSR 2021, 4 Mar 2021
[arXiv] [Project] [Github]
FastNeRF: High-Fidelity Neural Rendering at 200FPS
Stephan J. Garbin, Marek Kowalski, Matthew Johnson, Jamie Shotton, Julien Valentin
ICCV 2021, 18 Mar 2021
[arXiv] [Project]
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
Christian Reiser, Songyou Peng, Yiyi Liao, Andreas Geiger
ICCV 2021, 25 Mar 2021
[arXiv] [Github]
🔥PlenOctrees for Real-time Rendering of Neural Radiance Fields
Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, Angjoo Kanazawa
ICCV 2021, 25 Mar 2021
Abstract
We introduce a method to render Neural Radiance Fields (NeRFs) in real time using PlenOctrees, an octree-based 3D representation which supports view-dependent effects. Our method can render 800x800 images at more than 150 FPS, which is over 3000 times faster than conventional NeRFs. We do so without sacrificing quality while preserving the ability of NeRFs to perform free-viewpoint rendering of scenes with arbitrary geometry and view-dependent effects. Real-time performance is achieved by pre-tabulating the NeRF into a PlenOctree. In order to preserve view-dependent effects such as specularities, we factorize the appearance via closed-form spherical basis functions. Specifically, we show that it is possible to train NeRFs to predict a spherical harmonic representation of radiance, removing the viewing direction as an input to the neural network. Furthermore, we show that PlenOctrees can be directly optimized to further minimize the reconstruction loss, which leads to equal or better quality compared to competing methods. Moreover, this octree optimization step can be used to reduce the training time, as we no longer need to wait for the NeRF training to converge fully. Our real-time neural rendering approach may potentially enable new applications such as 6-DOF industrial and product visualizations, as well as next generation AR/VR systems. PlenOctrees are amenable to in-browser rendering as well; please visit the project page for the interactive online demo, as well as video and code: this https URL
[arXiv] [Project] [Github] [Viewer Github]
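The factorized appearance is just a dot product between stored spherical-harmonic coefficients and the SH basis evaluated at the view direction. A minimal sketch follows (degree-1 basis only, random coefficients; PlenOctrees store higher degrees per octree leaf):

```python
import numpy as np

def sh_basis_deg1(d):
    """Real spherical-harmonic basis (l <= 1) for a unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([0.282095, 0.488603 * y, 0.488603 * z, 0.488603 * x])

def sh_color(coeffs, view_dir):
    """coeffs: (3, 4) RGB x SH coefficients of one leaf; returns view-dependent RGB."""
    b = sh_basis_deg1(view_dir / np.linalg.norm(view_dir))
    return 1.0 / (1.0 + np.exp(-(coeffs @ b)))        # sigmoid keeps color in [0, 1]

coeffs = 0.1 * np.random.randn(3, 4)
print(sh_color(coeffs, np.array([0.0, 0.0, 1.0])))    # color seen from +z
print(sh_color(coeffs, np.array([1.0, 0.0, 0.0])))    # color seen from +x
```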
🔥Baking Neural Radiance Fields for Real-Time View Synthesis
Peter Hedman, Pratul P. Srinivasan, Ben Mildenhall, Jonathan T. Barron, Paul Debevec
ICCV 2021, 26 Mar 2021
Abstract
Neural volumetric representations such as Neural Radiance Fields (NeRF) have emerged as a compelling technique for learning to represent 3D scenes from images with the goal of rendering photorealistic images of the scene from unobserved viewpoints. However, NeRF's computational requirements are prohibitive for real-time applications: rendering views from a trained NeRF requires querying a multilayer perceptron (MLP) hundreds of times per ray. We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG) that enables real-time rendering on commodity hardware. To achieve this, we introduce 1) a reformulation of NeRF's architecture, and 2) a sparse voxel grid representation with learned feature vectors. The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact (averaging less than 90 MB per scene), and can be rendered in real-time (higher than 30 frames per second on a laptop GPU). Actual screen captures are shown in our video.
Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction
Cheng Sun, Min Sun, Hwann-Tzong Chen
CVPR 2022, 22 Nov 2021
[arXiv] [Project] [Github]
VaxNeRF: Revisiting the Classic for Voxel-Accelerated Neural Radiance Field
Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi Ochiai, Shixiang Shane Gu
arXiv preprint, 25 Nov 2021
[arXiv] [Github]
NeuSample: Neural Sample Field for Efficient View Synthesis
Naruya Kondo, Yuya Ikeda, Andrea Tagliasacchi, Yutaka Matsuo, Yoichi Ochiai, Shixiang Shane Gu
arXiv preprint, 30 Nov 2021
[arXiv] [Project] [Github]
🔥Plenoxels: Radiance Fields without Neural Networks
Alex Yu, Sara Fridovich-Keil, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa
arXiv preprint, 30 Nov 2021
Abstract
We introduce Plenoxels (plenoptic voxels), a system for photorealistic view synthesis. Plenoxels represent a scene as a sparse 3D grid with spherical harmonics. This representation can be optimized from calibrated images via gradient methods and regularization without any neural components. On standard, benchmark tasks, Plenoxels are optimized two orders of magnitude faster than Neural Radiance Fields with no loss in visual quality.
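Because the representation is just a grid of densities and SH coefficients, a query is plain trilinear interpolation. A minimal sketch with a dense toy grid (Plenoxels use a sparse grid and per-voxel spherical harmonics):

```python
import numpy as np

def trilinear(grid, p):
    """grid: (R, R, R, C) voxel values; p: continuous point in [0, R-1]^3."""
    i0 = np.floor(p).astype(int)
    i1 = np.minimum(i0 + 1, np.array(grid.shape[:3]) - 1)
    w = p - i0                                        # fractional offsets
    out = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                ix, iy, iz = (i1[0] if dx else i0[0]), (i1[1] if dy else i0[1]), (i1[2] if dz else i0[2])
                weight = ((w[0] if dx else 1 - w[0]) *
                          (w[1] if dy else 1 - w[1]) *
                          (w[2] if dz else 1 - w[2]))
                out = out + weight * grid[ix, iy, iz]
    return out

grid = np.random.rand(32, 32, 32, 28)                 # 1 density + 27 SH values per voxel
print(trilinear(grid, np.array([3.2, 10.7, 20.1])).shape)   # (28,)
```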
🔥Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
SIGGRAPH 2022, 16 Jan 2022
Abstract
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations: a small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
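The encoding itself is compact enough to sketch. The version below is a simplified illustration (the hash primes follow the paper; the table size, level count, and growth factor are arbitrary): each level hashes the integer corners of the query's grid cell into a small feature table, trilinearly blends the corner features, and the per-level results are concatenated before the MLP.

```python
import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_corner(ijk, table_size):
    h = np.uint64(0)
    for d in range(3):
        h ^= np.uint64(int(ijk[d])) * PRIMES[d]
    return int(h % np.uint64(table_size))

def encode(x, tables, base_res=16, growth=1.5):
    """x in [0,1]^3; tables: list of (T, F) trainable feature arrays, one per level."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)
        p = x * res
        i0 = np.floor(p).astype(np.int64)
        f = p - i0
        acc = np.zeros(table.shape[1])
        for corner in range(8):
            offs = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offs == 1, f, 1 - f))            # trilinear weight
            acc += w * table[hash_corner(i0 + offs, table.shape[0])]
        feats.append(acc)                                         # hash collisions are left to the MLP
    return np.concatenate(feats)

tables = [0.01 * np.random.randn(2**14, 2) for _ in range(8)]     # 8 levels, 2 features each
print(encode(np.array([0.3, 0.6, 0.9]), tables).shape)            # (16,)
```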
🔥TensoRF: Tensorial Radiance Fields
Anpei Chen, Zexiang Xu, Andreas Geiger, Jingyi Yu, Hao Su
ECCV 2022, 17 Mar 2022
Abstract
We present TensoRF, a novel approach to model and reconstruct radiance fields. Unlike NeRF that purely uses MLPs, we model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features. Our central idea is to factorize the 4D scene tensor into multiple compact low-rank tensor components. We demonstrate that applying traditional CP decomposition -- that factorizes tensors into rank-one components with compact vectors -- in our framework leads to improvements over vanilla NeRF. To further boost performance, we introduce a novel vector-matrix (VM) decomposition that relaxes the low-rank constraints for two modes of a tensor and factorizes tensors into compact vector and matrix factors. Beyond superior rendering quality, our models with CP and VM decompositions lead to a significantly lower memory footprint in comparison to previous and concurrent works that directly optimize per-voxel features. Experimentally, we demonstrate that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with better rendering quality and even a smaller model size (<4 MB) compared to NeRF. Moreover, TensoRF with VM decomposition further boosts rendering quality and outperforms previous state-of-the-art methods, while reducing the reconstruction time (<10 min) and retaining a compact model size (<75 MB).
[arXiv] [Project] [Github] [Notes]
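The memory saving of the CP variant is easy to see in code: the value of any voxel is reconstructed from three per-axis vectors per component, so storage scales with the grid side length rather than its cube. A minimal sketch with made-up rank and resolution:

```python
import numpy as np

R, N, C = 16, 128, 27                                  # rank, grid resolution, feature channels
vx, vy, vz = (np.random.randn(R, N) for _ in range(3)) # per-axis factor vectors (trainable)
b = np.random.randn(R, C)                              # per-component channel basis

def cp_feature(i, j, k):
    """Feature vector of voxel (i, j, k) reconstructed from the CP factors."""
    s = vx[:, i] * vy[:, j] * vz[:, k]                 # (R,) one scalar per rank-one component
    return s @ b                                       # (C,)

print(cp_feature(10, 64, 100).shape)                   # (27,)
print(3 * R * N + R * C, "floats stored vs", N**3 * C, "for a dense grid")
```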
SqueezeNeRF: Further factorized FastNeRF for memory-efficient inference
Krishna Wadhwani, Tamaki Kojima
arXiv preprint, 6 Apr 2022
[arXiv]
AdaNeRF: Adaptive Sampling for Real-Time Rendering of Neural Radiance Fields
Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollhöfer, Markus Steinberger
ECCV 2022, 21 Jul 2022
[arXiv] [Project] [Github]
🔥MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient Neural Field Rendering on Mobile Architectures
Zhiqin Chen, Thomas Funkhouser, Peter Hedman, Andrea Tagliasacchi
CVPR 2023, 30 Jul 2022
Abstract
Neural Radiance Fields (NeRFs) have demonstrated amazing ability to synthesize images of 3D scenes from novel views. However, they rely upon specialized volumetric rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware. This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard rendering pipelines. The NeRF is represented as a set of polygons with textures representing binary opacities and feature vectors. Traditional rendering of the polygons with a z-buffer yields an image with features at every pixel, which are interpreted by a small, view-dependent MLP running in a fragment shader to produce a final pixel color. This approach enables NeRFs to be rendered with the traditional polygon rasterization pipeline, which provides massive pixel-level parallelism, achieving interactive frame rates on a wide range of compute platforms, including mobile phones.
Real-Time Neural Light Field on Mobile Devices
Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, Jian Ren
CVPR 2023, 15 Dec 2022
[arXiv] [Project] [Github]
Factor Fields: A Unified Framework for Neural Fields and Beyond
Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
arXiv preprint, 2 Feb 2023
[arXiv] [Project] [Github]
🔥MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in Unbounded Scenes
Christian Reiser, Richard Szeliski, Dor Verbin, Pratul P. Srinivasan, Ben Mildenhall, Andreas Geiger, Jonathan T. Barron, Peter Hedman
arXiv preprint, 23 Feb 2023
Abstract
Neural radiance fields enable state-of-the-art photorealistic view synthesis. However, existing radiance field representations are either too compute-intensive for real-time rendering or require too much memory to scale to large scenes. We present a Memory-Efficient Radiance Field (MERF) representation that achieves real-time rendering of large-scale scenes in a browser. MERF reduces the memory consumption of prior sparse volumetric radiance fields using a combination of a sparse feature grid and high-resolution 2D feature planes. To support large-scale unbounded scenes, we introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection. We design a lossless procedure for baking the parameterization used during training into a model that achieves real-time rendering while still preserving the photorealistic view synthesis quality of a volumetric radiance field.
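The contraction idea can be written in a few lines. Shown below is the mip-NeRF 360 form as an illustration; MERF replaces it with a per-coordinate variant precisely so that contracted space still supports cheap ray-box intersection against the feature grid.

```python
import numpy as np

def contract(x):
    """Map an unbounded point into a ball of radius 2: identity inside the unit ball,
    increasingly compressed outside, so distant content fits a bounded grid."""
    n = np.linalg.norm(x)
    if n <= 1.0:
        return x
    return (2.0 - 1.0 / n) * (x / n)

print(contract(np.array([0.3, 0.1, 0.2])))    # unchanged: already inside the unit ball
print(contract(np.array([50.0, 0.0, 0.0])))   # ~[1.98, 0, 0]: far content lands near the boundary
```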
🔥BakedSDF: Meshing Neural SDFs for Real-Time View Synthesis
Lior Yariv, Peter Hedman, Christian Reiser, Dor Verbin, Pratul P. Srinivasan, Richard Szeliski, Jonathan T. Barron, Ben Mildenhall
arXiv preprint, 28 Feb 2023
Abstract
We present a method for reconstructing high-quality meshes of large unbounded real-world scenes suitable for photorealistic novel view synthesis. We first optimize a hybrid neural volume-surface scene representation designed to have well-behaved level sets that correspond to surfaces in the scene. We then bake this representation into a high-quality triangle mesh, which we equip with a simple and fast view-dependent appearance model based on spherical Gaussians. Finally, we optimize this baked representation to best reproduce the captured viewpoints, resulting in a model that can leverage accelerated polygon rasterization pipelines for real-time view synthesis on commodity hardware. Our approach outperforms previous scene representations for real-time rendering in terms of accuracy, speed, and power consumption, and produces high quality meshes that enable applications such as appearance editing and physical simulation.
Volume Feature Rendering for Fast Neural Radiance Field Reconstruction
Kang Han, Wei Xiang, Lu Yu
arXiv preprint, 29 May 2023
[arXiv]
Compact Real-time Radiance Fields with Neural Codebook
Lingzhi Li, Zhongshu Wang, Zhen Shen, Li Shen, Ping Tan
ICME 2023, 29 May 2023
[arXiv]
Dictionary Fields: Learning a Neural Basis Decomposition
Anpei Chen, Zexiang Xu, Xinyue Wei, Siyu Tang, Hao Su, Andreas Geiger
SIGGRAPH 2023
[Paper] [Github]
NAS-NeRF: Generative Neural Architecture Search for Neural Radiance Fields
arXiv preprint, 25 Sep 2023
[arXiv] [Project]
Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering
Tong Wang, Shuichi Kurabayashi
arXiv preprint, 3 Oct 2023
[arXiv]
MIMO-NeRF: Fast Neural Rendering with Multi-input Multi-output Neural Radiance Fields
Takuhiro Kaneko
ICCV 2023, 3 Oct 2023
[arXiv] [Project]
Neural Processing of Tri-Plane Hybrid Neural Fields
Adriano Cardace, Pierluigi Zama Ramirez, Francesco Ballerini, Allan Zhou, Samuele Salti, Luigi Di Stefano
arXiv preprint, 2 Oct 2023
[arXiv]
CAwa-NeRF: Instant Learning of Compression-Aware NeRF Features
Omnia Mahmoud, Théo Ladune, Matthieu Gendrin
arXiv preprint, 23 Oct 2023
[arXiv]
Efficient Encoding of Graphics Primitives with Simplex-based Structures
Yibo Wen, Yunfan Yang
arXiv preprint, 26 Nov 2023
[arXiv]
ProNeRF: Learning Efficient Projection-Aware Ray Sampling for Fine-Grained Implicit Neural Radiance Fields
Juan Luis Gonzalez Bello, Minh-Quan Viet Bui, Munchurl Kim
arXiv preprint, 13 Dec 2023
[arXiv] [Project] [Code]
HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation
Paweł Batorski, Dawid Malarz, Marcin Przewięźlikowski, Marcin Mazur, Sławomir Tadeja, Przemysław Spurek
arXiv preprint, 2 Feb 2024
[arXiv]
Preconditioners for the Stochastic Training of Implicit Neural Representations
Shin-Fang Chng, Hemanth Saratchandran, Simon Lucey
arXiv preprint, 13 Feb 2024
[arXiv]
Improved Generalization of Weight Space Networks via Augmentations
Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron
arXiv preprint, 6 Feb 2024
[arXiv]
Vosh: Voxel-Mesh Hybrid Representation for Real-Time View Synthesis
Chenhao Zhang, Yongyang Zhou, Lei Zhang
arXiv preprint, 11 Mar 2024
[arXiv]
Plug-and-Play Acceleration of Occupancy Grid-based NeRF Rendering using VDB Grid and Hierarchical Ray Traversal
Yoshio Kato, Shuhei Tarashima
CVPR Neural Rendering Intelligence Workshop 2024, 16 Apr 2024
[arXiv] [Code]
Cicero: Addressing Algorithmic and Architectural Bottlenecks in Neural Rendering by Radiance Warping and Memory Optimizations
Yu Feng, Zihan Liu, Jingwen Leng, Minyi Guo, Yuhao Zhu
arXiv preprint, 18 Apr 2024
[arXiv]
🔥NeRF-XL: Scaling NeRFs with Multiple GPUs
Ruilong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams
arXiv preprint, 24 Apr 2024
Abstract
We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with an arbitrarily large capacity. We begin by revisiting existing multi-GPU approaches, which decompose large scenes into multiple independently trained NeRFs, and identify several fundamental issues with these methods that hinder improvements in reconstruction quality as additional computational resources (GPUs) are used in training. NeRF-XL remedies these issues and enables the training and rendering of NeRFs with an arbitrary number of parameters by simply using more hardware. At the core of our method lies a novel distributed training and rendering formulation, which is mathematically equivalent to the classic single-GPU case and minimizes communication between GPUs. By unlocking NeRFs with arbitrarily large parameter counts, our approach is the first to reveal multi-GPU scaling laws for NeRFs, showing improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs. We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25km^2 city area.
Towards Real-Time Neural Volumetric Rendering on Mobile Devices: A Measurement Study
Zhe Wang, Yifei Zhu
ACM SIGCOMM Workshop on Emerging Multimedia Systems 2024, 23 Jun 2024
[arXiv]
NGP-RT: Fusing Multi-Level Hash Features with Lightweight Attention for Real-Time Novel View Synthesis
Yubin Hu, Xiaoyang Guo, Yang Xiao, Jingwei Huang, Yong-Jin Liu
ECCV 2024, 15 Jul 2024
[arXiv]
Boost Your NeRF: A Model-Agnostic Mixture of Experts Framework for High Quality and Efficient Rendering
Francesco Di Sario, Riccardo Renzulli, Enzo Tartaglione, Marco Grangetto
arXiv preprint, 15 Jul 2024
[arXiv]
Potamoi: Accelerating Neural Rendering via a Unified Streaming Architecture
Yu Feng, Weikai Lin, Zihan Liu, Jingwen Leng, Minyi Guo, Han Zhao, Xiaofeng Hou, Jieru Zhao, Yuhao Zhu
arXiv preprint, 13 Aug 2024
[arXiv]
Expansive Supervision for Neural Radiance Field
Weixiang Zhang, Shuzhao Xie, Shijia Ge, Wei Yao, Chen Tang, Zhi Wang
arXiv preprint, 12 Sep 2024
[arXiv]
🔥pixelNeRF: Neural Radiance Fields from One or Few Images
Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa
CVPR 2021, 3 Dec 2020
Abstract
We propose pixelNeRF, a learning framework that predicts a continuous neural scene representation conditioned on one or few input images. The existing approach for constructing neural radiance fields involves optimizing the representation to every scene independently, requiring many calibrated views and significant compute time. We take a step towards resolving these shortcomings by introducing an architecture that conditions a NeRF on image inputs in a fully convolutional manner. This allows the network to be trained across multiple scenes to learn a scene prior, enabling it to perform novel view synthesis in a feed-forward manner from a sparse set of views (as few as one). Leveraging the volume rendering approach of NeRF, our model can be trained directly from images with no explicit 3D supervision. We conduct extensive experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects as well as entire unseen categories. We further demonstrate the flexibility of pixelNeRF by demonstrating it on multi-object ShapeNet scenes and real scenes from the DTU dataset. In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction. For the video and code, please visit the project website: this https URL
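The conditioning mechanism is straightforward to sketch: a 3D sample point is projected into the input view with the camera intrinsics, and the CNN feature at that pixel is fed to the NeRF MLP alongside the point itself. The pinhole model, feature resolution, and tensors below are placeholders, not the paper's exact pipeline.

```python
import torch
import torch.nn.functional as F

def sample_pixel_feature(feat_map, K, point_cam):
    """feat_map: (1, C, H, W) image features; K: (3, 3) intrinsics; point_cam: (3,) camera coords."""
    uvw = K @ point_cam
    uv = uvw[:2] / uvw[2]                              # pixel coordinates of the projection
    H, W = feat_map.shape[-2:]
    u = 2 * uv[0] / (W - 1) - 1                        # normalize to [-1, 1] for grid_sample
    v = 2 * uv[1] / (H - 1) - 1
    grid = torch.stack([u, v]).view(1, 1, 1, 2)
    return F.grid_sample(feat_map, grid, align_corners=True).view(-1)   # (C,)

feat_map = torch.randn(1, 64, 60, 80)
K = torch.tensor([[100., 0., 40.], [0., 100., 30.], [0., 0., 1.]])
feat = sample_pixel_feature(feat_map, K, torch.tensor([0.1, -0.2, 2.0]))
print(feat.shape)                                      # torch.Size([64]); concatenated with the point
```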
IBRNet: Learning Multi-View Image-Based Rendering
Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, Thomas Funkhouser
CVPR 2021, 25 Feb 2021
[arXiv] [Project] [Github]
🔥Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis
Ajay Jain, Matthew Tancik, Pieter Abbeel
arXiv preprint, 1 Apr 2021
Abstract
We present DietNeRF, a 3D neural scene representation estimated from a few images. Neural Radiance Fields (NeRF) learn a continuous volumetric representation of a scene through multi-view consistency, and can be rendered from novel viewpoints by ray casting. While NeRF has an impressive ability to reconstruct geometry and fine details given many images, up to 100 for challenging 360° scenes, it often finds a degenerate solution to its image reconstruction objective when only a few input views are available. To improve few-shot quality, we propose DietNeRF. We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. DietNeRF is trained on individual scenes to (1) correctly render given input views from the same pose, and (2) match high-level semantic attributes across different, random poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse single-view, 2D photographs mined from the web with natural language supervision. In experiments, DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions.
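The auxiliary loss compares image-level embeddings rather than pixels, so it can supervise renderings at poses with no ground-truth view. The encoder below is a stand-in just to make the sketch run; DietNeRF uses a frozen CLIP vision transformer.

```python
import torch

def semantic_consistency_loss(encoder, rendered, reference):
    """rendered, reference: (1, 3, H, W) images; returns 1 - cosine similarity of embeddings."""
    z_r = torch.nn.functional.normalize(encoder(rendered), dim=-1)
    z_g = torch.nn.functional.normalize(encoder(reference), dim=-1)
    return 1.0 - (z_r * z_g).sum(dim=-1).mean()

# stand-in encoder: global average pool + linear projection (a frozen CLIP model in practice)
encoder = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(3, 128))
loss = semantic_consistency_loss(encoder, torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
print(loss.item())   # added to the usual photometric loss during training
```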
MINE: Towards Continuous Depth MPI with NeRF for Novel View Synthesis
Jiaxin Li, Zijian Feng, Qi She, Henghui Ding, Changhu Wang, Gim Hee Lee
ICCV 2021, 27 Mar 2021
[arXiv] [Project] [Github]
🔥MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo
Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su
ICCV 2021, 29 Mar 2021
Abstract
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis. Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference. Our approach leverages plane-swept cost volumes (widely used in multi-view stereo) for geometry-aware scene reasoning, and combines this with physically based volume rendering for neural radiance field reconstruction. We train our network on real objects in the DTU dataset, and test it on three different datasets to evaluate its effectiveness and generalizability. Our approach can generalize across scenes (even indoor scenes, completely different from our training scenes of objects) and generate realistic view synthesis results using only three input images, significantly outperforming concurrent works on generalizable radiance field reconstruction. Moreover, if dense images are captured, our estimated radiance field representation can be easily fine-tuned; this leads to fast per-scene reconstruction with higher rendering quality and substantially less optimization time than NeRF.
Common Objects in 3D: Large-Scale Learning and Evaluation of Real-life 3D Category Reconstruction
Jeremy Reizenstein, Roman Shapovalov, Philipp Henzler, Luca Sbordone, Patrick Labatut, David Novotny
ICCV 2021, 29 Mar 2021
[arXiv] [Co3D Dataset]
RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs
Michael Niemeyer, Jonathan T. Barron, Ben Mildenhall, Mehdi S. M. Sajjadi, Andreas Geiger, Noha Radwan
CVPR 2022, 1 Dec 2021
[arXiv] [Project] [Code] [Notes]
🔥ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers
Jonáš Kulhánek, Erik Derner, Torsten Sattler, Robert Babuška
ECCV 2022, 18 Mar 2022
Abstract
Novel view synthesis is a long-standing problem. In this work, we consider a variant of the problem where we are given only a few context views sparsely covering a scene or an object. The goal is to predict novel viewpoints in the scene, which requires learning priors. The current state of the art is based on Neural Radiance Field (NeRF), and while achieving impressive results, the methods suffer from long training times as they require evaluating millions of 3D point samples via a neural network for each image. We propose a 2D-only method that maps multiple context views and a query pose to a new image in a single pass of a neural network. Our model uses a two-stage architecture consisting of a codebook and a transformer model. The codebook is used to embed individual images into a smaller latent space, and the transformer solves the view synthesis task in this more compact space. To train our model efficiently, we introduce a novel branching attention mechanism that allows us to use the same model not only for neural rendering but also for camera pose estimation. Experimental results on real-world scenes show that our approach is competitive compared to NeRF-based methods while not reasoning explicitly in 3D, and it is faster to train.
S3-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint
Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong
NeurIPS 2022, 17 Oct 2022
[arXiv] [Project] [Github]
NeuralLift-360: Lifting An In-the-wild 2D Photo to A 3D Object with 360° Views
Wenqi Yang, Guanying Chen, Chaofeng Chen, Zhenfang Chen, Kwan-Yee K. Wong
arXiv preprint, 29 Nov 2022
[arXiv] [Project] [Github]
SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input Images
Abdullah Hamdi, Bernard Ghanem, Matthias Nießner
arXiv preprint, 18 Dec 2022
[arXiv] [Project] [Github]
Geometry-biased Transformers for Novel View Synthesis
Naveen Venkat, Mayank Agarwal, Maneesh Singh, Shubham Tulsiani
arXiv preprint, 11 Jan 2023
[arXiv] [Project] [Github]
Behind the Scenes: Density Fields for Single View Reconstruction
Felix Wimbauer, Nan Yang, Christian Rupprecht, Daniel Cremers
CVPR 2023, 18 Jan 2023
[arXiv] [Project] [Github] [Video]
NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion
Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian Theobalt, Lingjie Liu, Ravi Ramamoorthi
arXiv preprint, 20 Feb 2023
[arXiv] [Project]
DiffusioNeRF: Regularizing Neural Radiance Fields with Denoising Diffusion Models
Jamie Wynn, Daniyar Turmukhambetov
arXiv preprint, 23 Feb 2023
[arXiv] [Github]
🔥FreeNeRF: Improving Few-shot Neural Rendering with Free Frequency Regularization
Jiawei Yang, Marco Pavone, Yue Wang
CVPR 2023, 13 Mar 2023
Abstract
Novel view synthesis with sparse inputs is a challenging problem for neural radiance fields (NeRF). Recent efforts alleviate this challenge by introducing external supervision, such as pre-trained models and extra depth signals, and by non-trivial patch-based rendering. In this paper, we present Frequency regularized NeRF (FreeNeRF), a surprisingly simple baseline that outperforms previous methods with minimal modifications to the plain NeRF. We analyze the key challenges in few-shot neural rendering and find that frequency plays an important role in NeRF's training. Based on the analysis, we propose two regularization terms. One is to regularize the frequency range of NeRF's inputs, while the other is to penalize the near-camera density fields. Both techniques are "free lunches" at no additional computational cost. We demonstrate that even with one line of code change, the original NeRF can achieve similar performance as other complicated methods in the few-shot setting. FreeNeRF achieves state-of-the-art performance across diverse datasets, including Blender, DTU, and LLFF. We hope this simple baseline will motivate a rethinking of the fundamental role of frequency in NeRF's training under the low-data regime and beyond.
[arXiv] [Project] [Github] [Notes]
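The frequency regularization really is a one-liner on top of the positional encoding: a mask over frequency bands that opens up as training progresses. The linear schedule below is an assumed simplification of the paper's annealing.

```python
import numpy as np

def positional_encoding(x, n_freqs):
    """x: (3,) point; returns sin/cos features ordered from low to high frequency."""
    feats = []
    for k in range(n_freqs):
        feats += [np.sin(2.0 ** k * np.pi * x), np.cos(2.0 ** k * np.pi * x)]
    return np.concatenate(feats)

def freq_mask(n_freqs, step, total_steps):
    """Per-band visibility in [0, 1]; band k ramps open once step/total_steps passes k/n_freqs."""
    ramp = np.clip(n_freqs * step / total_steps - np.arange(n_freqs), 0.0, 1.0)
    return np.repeat(ramp, 6)            # each band contributes 3 sin + 3 cos features

enc = positional_encoding(np.array([0.2, -0.4, 0.7]), n_freqs=10)
masked = enc * freq_mask(10, step=1500, total_steps=10000)   # early training: mostly low frequencies
print(np.count_nonzero(masked), "of", enc.size, "encoding entries visible")
```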
Zero-1-to-3: Zero-shot One Image to 3D Object
Jiawei Yang, Marco Pavone, Yue Wang
CVPR 2023, 20 Mar 2023
[arXiv] [Project] [Github]
SparseNeRF: Distilling Depth Ranking for Few-shot Novel View Synthesis
Guangcong Wang, Zhaoxi Chen, Chen Change Loy, Ziwei Liu
ICCV 2023, 28 Mar 2023
[arXiv] [Project] [Github]
VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs
Jiakai Sun, Zhanjie Zhang, Jiafu Chen, Guangyuan Li, Boyan Ji, Lei Zhao, Wei Xing
IJCAI 2023, 26 Apr 2023
[arXiv] [Github]
ViP-NeRF: Visibility Prior for Sparse Input Neural Radiance Fields
Nagabhushan Somraj, Rajiv Soundararajan
SIGGRAPH 2023, 28 Apr 2023
[arXiv] [Project] [Github] [Video]
DäRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation
Jiuhn Song, Seonghoon Park, Honggyu An, Seokju Cho, Min-Seop Kwak, Sungjin Cho, Seungryong Kim
arXiv preprint, 30 May 2023
[arXiv] [Project]
ZIGNeRF: Zero-shot 3D Scene Representation with Invertible Generative Neural Radiance Fields
Kanghyeok Ko, Minhyeok Lee
arXiv preprint, 5 Jun 2023
[arXiv]
Car-Studio: Learning Car Radiance Fields from Single-View and Endless In-the-wild Images
Tianyu Liu, Hao Zhao, Yang Yu, Guyue Zhou, Ming Liu
IEEE RA-L, 26 Jul 2023
[arXiv] [Project]
Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs
Yanqi Bao, Yuxin Li, Jing Huo, Tianyu Ding, Xinyue Liang, Wenbin Li, Yang Gao
ACMMM 2023, 5 Aug 2023
[arXiv] [Github]
Novel-view Synthesis and Pose Estimation for Hand-Object Interaction from Sparse Views
Wentian Qu, Zhaopeng Cui, Yinda Zhang, Chenyu Meng, Cuixia Ma, Xiaoming Deng, Hongan Wang
arXiv preprint, 22 Aug 2023
[arXiv] [Project]
PERF: Panoramic Neural Radiance Field from a Single Panorama
Guangcong Wang, Peng Wang, Zhaoxi Chen, Wenping Wang, Chen Change Loy, Ziwei Liu
arXiv preprint, 25 Oct 2023
[arXiv] [Project] [Github]
ManifoldNeRF: View-dependent Image Feature Supervision for Few-shot Neural Radiance Fields
Daiju Kanaoka, Motoharu Sonogashira, Hakaru Tamukoh, Yasutomo Kawanishi
BMVC 2023, 20 Oct 2023
[arXiv] [Github]
How Many Views Are Needed to Reconstruct an Unknown Object Using NeRF?
Sicong Pan, Liren Jin, Hao Hu, Marija Popović, Maren Bennewitz
ICRA 2024, 1 Oct 2023
[arXiv]
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering
Haidong Zhu, Tianyu Ding, Tianyi Chen, Ilya Zharkov, Ram Nevatia, Luming Liang
arXiv preprint, 27 Nov 2023
[arXiv] [Project]
CorresNeRF: Image Correspondence Priors for Neural Radiance Fields
Yixing Lao, Xiaogang Xu, Zhipeng Cai, Xihui Liu, Hengshuang Zhao
NeurIPS 2023, 11 Dec 2023
[arXiv] [Project] [Code]
Novel View Synthesis with View-Dependent Effects from a Single Image
Juan Luis Gonzalez Bello, Munchurl Kim
arXiv preprint, 13 Dec 2023
[arXiv] [Project]
ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Process
Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, Ke Li, Leonidas Guibas
arXiv preprint, 16 Jan 2024
[arXiv] [Project]
HG3-NeRF: Hierarchical Geometric, Semantic, and Photometric Guided Neural Radiance Fields for Sparse View Inputs
Zelin Gao, Weichen Dai, Yu Zhang
arXiv preprint, 22 Jan 2024
[arXiv]
FrameNeRF: A Simple and Efficient Framework for Few-shot Novel View Synthesis
Yan Xing, Pan Wang, Ligang Liu, Daolun Li, Li Zhang
arXiv preprint, 22 Feb 2024
[arXiv]
CMC: Few-shot Novel View Synthesis via Cross-view Multiplane Consistency
Hanxin Zhu, Tianyu He, Zhibo Chen
IEEE VR 2024, 26 Feb 2024
[arXiv]
DreamUp3D: Object-Centric Generative Models for Single-View 3D Scene Understanding and Real-to-Sim Transfer
Yizhe Wu, Haitz Sáez de Ocáriz Borde, Jack Collins, Oiwi Parker Jones, Ingmar Posner
arXiv preprint, 26 Feb 2024
[arXiv]
Depth-Guided Robust and Fast Point Cloud Fusion NeRF for Sparse Input Views
Shuai Guo, Qiuwen Wang, Yijie Gao, Rong Xie, Li Song
arXiv preprint, 4 Mar 2024
[arXiv]
Is Vanilla MLP in Neural Radiance Field Enough for Few-shot View Synthesis?
Hanxin Zhu, Tianyu He, Xin Li, Bingchen Li, Zhibo Chen
CVPR 2024, 10 Mar 2024
[arXiv]
FSViewFusion: Few-Shots View Generation of Novel Objects
Rukhshanda Hussain, Hui Xian Grace Lim, Borchun Chen, Mubarak Shah, Ser Nam Lim
arXiv preprint, 11 Mar 2024
[arXiv]
CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs
Yingji Zhong, Lanqing Hong, Zhenguo Li, Dan Xu
CVPR 2024, 25 Mar 2024
[arXiv] [Project]
UPNeRF: A Unified Framework for Monocular 3D Object Reconstruction and Pose Estimation
Yuliang Guo, Abhinav Kumar, Cheng Zhao, Ruoyu Wang, Xinyu Huang, Liu Ren
arXiv preprint, 23 Mar 2024
[arXiv]
Stable Surface Regularization for Fast Few-Shot NeRF
Byeongin Joung, Byeong-Uk Lee, Jaesung Choe, Ukcheol Shin, Minjun Kang, Taeyeop Lee, In So Kweon, Kuk-Jin Yoon
3DV 2024, 29 Mar 2024
[arXiv]
SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance
Yuru Xiao, Xianming Liu, Deming Zhai, Kui Jiang, Junjun Jiang, Xiangyang Ji
arXiv preprint, 1 Apr 2024
[arXiv]
Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning
Rui Li, Tobias Fischer, Mattia Segu, Marc Pollefeys, Luc Van Gool, Federico Tombari
CVPR 2024, 4 Apr 2024
[arXiv] [Project] [Code]
Boosting Self-Supervision for Single-View Scene Completion via Knowledge Distillation
Keonhee Han, Dominik Muhle, Felix Wimbauer, Daniel Cremers
arXiv preprint, 11 Apr 2024
[arXiv]
G-NeRF: Geometry-enhanced Novel View Synthesis from Single-View Images
Zixiong Huang, Qi Chen, Libo Sun, Yifan Yang, Naizhou Wang, Mingkui Tan, Qi Wu
CVPR 2024, 11 Apr 2024
[arXiv]
Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions
Nagabhushan Somraj, Adithyan Karanayil, Sai Harsha Mupparaju, Rajiv Soundararajan
arXiv preprint, 29 Apr 2024
[arXiv] [Project]
GDGS: Gradient Domain Gaussian Splatting for Sparse Representation of Radiance Fields
Yuanhao Gong
arXiv preprint, 8 May 2024
[arXiv]
Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs
Mingyu Kim, Jun-Seong Kim, Se-Young Yun, Jin-Hwa Kim
ICML 2024, 13 May 2024
[arXiv] [Project] [Code]
Spatial Annealing Smoothing for Efficient Few-shot Neural Rendering
Yuru Xiao, Xianming Liu, Deming Zhai, Kui Jiang, Junjun Jiang, Xiangyang Ji
arXiv preprint, 12 Jun 2024
[arXiv]
M-LRM: Multi-view Large Reconstruction Model
Mengfei Li, Xiaoxiao Long, Yixun Liang, Weiyu Li, Yuan Liu, Peng Li, Xiaowei Chi, Xingqun Qi, Wei Xue, Wenhan Luo, Qifeng Liu, Yike Guo
arXiv preprint, 11 Jun 2024
[arXiv] [Project]
GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement
Peiye Zhuang, Songfang Han, Chaoyang Wang, Aliaksandr Siarohin, Jiaxu Zou, Michael Vasilkovsky, Vladislav Shakhrai, Sergey Korolev, Sergey Tulyakov, Hsin-Ying Lee
arXiv preprint, 9 Jun 2024
[arXiv] [Project]
Enhancing Neural Radiance Fields with Depth and Normal Completion Priors from Sparse Views
Jiawei Guo, HungChyun Chou, Ning Ding
arXiv preprint, 8 Jul 2024
[arXiv]
MomentsNeRF: Leveraging Orthogonal Moments for Few-Shot Neural Rendering
Ahmad AlMughrabi, Ricardo Marques, Petia Radeva
arXiv preprint, 2 Jul 2024
[arXiv] [Code]
InfoNorm: Mutual Information Shaping of Normals for Sparse-View Reconstruction
Xulong Wang, Siyan Dong, Youyi Zheng, Yanchao Yang
ECCV 2024, 17 Jul 2024
[arXiv] [Code]
FewShotNeRF: Meta-Learning-based Novel View Synthesis for Rapid Scene-Specific Adaptation
Piraveen Sivakumar, Paul Janson, Jathushan Rajasegaran, Thanuja Ambegoda
arXiv preprint, 9 Aug 2024
[arXiv]
SSNeRF: Sparse View Semi-supervised Neural Radiance Fields with Augmentation
Xiao Cao, Beibei Lin, Bo Wang, Zhiyong Huang, Robby T. Tan
arXiv preprint, 17 Aug 2024
[arXiv]
GeoTransfer: Generalizable Few-Shot Multi-View Reconstruction via Transfer Learning
Shubhendu Jena, Franck Multon, Adnane Boukhayma
arXiv preprint, 27 Aug 2024
[arXiv]
Generic Objects as Pose Probes for Few-Shot View Synthesis
Zhirui Gao, Renjiao Yi, Chenyang Zhu, Ke Zhuang, Wei Chen, Kai Xu
arXiv preprint, 29 Aug 2024
[arXiv] [Project]
Toward General Object-level Mapping from Sparse Views with 3D Diffusion Priors
Ziwei Liao, Binbin Xu, Steven L. Waslander
CoRL 2024, 7 Oct 2024
[arXiv] [Code]
Few-shot NeRF by Adaptive Rendering Loss Regularization
Qingshan Xu, Xuanyu Yi, Jianyao Xu, Wenbing Tao, Yew-Soon Ong, Hanwang Zhang
ECCV 2024, 23 Oct 2024
[arXiv]
FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
Chin-Yang Lin, Chung-Ho Wu, Chang-Han Yeh, Shih-Han Yen, Cheng Sun, Yu-Lun Liu
arXiv preprint, 21 Oct 2024
[arXiv] [Project]
NeRFs to Gaussian Splats, and Back
Siming He, Zach Osman, Pratik Chaudhari
arXiv preprint, 15 May 2024
[arXiv]
How NeRFs and 3D Gaussian Splatting are Reshaping SLAM: a Survey
Fabio Tosi, Youmin Zhang, Ziren Gong, Erik Sandström, Stefano Mattoccia, Martin R. Oswald, Matteo Poggi
arXiv preprint, 20 Feb 2024
[arXiv]
🔥NICE-SLAM: Neural Implicit Scalable Encoding for SLAM
Zihan Zhu, Songyou Peng, Viktor Larsson, Weiwei Xu, Hujun Bao, Zhaopeng Cui, Martin R. Oswald, Marc Pollefeys
CVPR 2022, 22 Dec 2021
Abstract
Neural implicit representations have recently shown encouraging results in various domains, including promising progress in simultaneous localization and mapping (SLAM). Nevertheless, existing methods produce over-smoothed scene reconstructions and have difficulty scaling up to large scenes. These limitations are mainly due to their simple fully-connected network architecture that does not incorporate local information in the observations. In this paper, we present NICE-SLAM, a dense SLAM system that incorporates multi-level local information by introducing a hierarchical scene representation. Optimizing this representation with pre-trained geometric priors enables detailed reconstruction on large indoor scenes. Compared to recent neural implicit SLAM systems, our approach is more scalable, efficient, and robust. Experiments on five challenging datasets demonstrate competitive results of NICE-SLAM in both mapping and tracking quality. Project page: this https URL
[arXiv] [Project] [Github] [Notes]
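The hierarchical representation can be sketched as several feature grids queried at the same point, with the concatenated features decoded by small networks. The nearest-voxel lookup and the tiny linear decoder below are simplifications for illustration; NICE-SLAM interpolates trilinearly and uses pre-trained decoders.

```python
import numpy as np

grids = {res: 0.01 * np.random.randn(res, res, res, 8) for res in (16, 32, 64)}   # coarse/mid/fine

def query_hierarchy(p):
    """p in [0, 1]^3; returns the concatenated multi-level feature vector, shape (24,)."""
    feats = []
    for res, grid in grids.items():
        idx = np.minimum((p * res).astype(int), res - 1)     # nearest voxel (trilinear in practice)
        feats.append(grid[idx[0], idx[1], idx[2]])
    return np.concatenate(feats)

W = 0.1 * np.random.randn(24, 1)                             # stand-in occupancy decoder
occ = 1.0 / (1.0 + np.exp(-(query_hierarchy(np.array([0.4, 0.5, 0.6])) @ W)))
print(occ)
```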
H2-Mapping: Real-time Dense Mapping Using Hierarchical Hybrid Representation
Chenxing Jiang, Hanwen Zhang, Peize Liu, Zehuan Yu, Hui Cheng, Boyu Zhou, Shaojie Shen
IEEE Robotics and Automation Letters, 5 Jun 2023
[arXiv] [Github] [Video]
PNeRFLoc: Visual Localization with Point-based Neural Radiance Fields
Boming Zhao, Luwei Yang, Mao Mao, Hujun Bao, Zhaopeng Cui
AAAI 2024, 17 Dec 2023
[arXiv]
CaLDiff: Camera Localization in NeRF via Pose Diffusion
Rashik Shrestha, Bishad Koju, Abhigyan Bhusal, Danda Pani Paudel, François Rameau
arXiv preprint, 23 Dec 2023
[arXiv]
Hi-Map: Hierarchical Factorized Radiance Field for High-Fidelity Monocular Dense Mapping
Tongyan Hua, Haotian Bai, Zidong Cao, Ming Liu, Dacheng Tao, Lin Wang
arXiv preprint, 6 Jan 2024
[arXiv]
N^3-Mapping: Normal Guided Neural Non-Projective Signed Distance Fields for Large-scale 3D Mapping
Shuangfu Song, Junqiao Zhao, Kai Huang, Jiaye Lin, Chen Ye, Tiantian Feng
arXiv preprint, 7 Jan 2024
[arXiv]
Q-SLAM: Quadric Representations for Monocular SLAM
Chensheng Peng, Chenfeng Xu, Yue Wang, Mingyu Ding, Heng Yang, Masayoshi Tomizuka, Kurt Keutzer, Marco Pavone, Wei Zhan
arXiv preprint, 12 Mar 2024
[arXiv]
Learning Neural Volumetric Pose Features for Camera Localization
Jingyu Lin, Jiaqi Gu, Bojian Wu, Lubin Fan, Renjie Chen, Ligang Liu, Jieping Ye
arXiv preprint, 19 Mar 2024
[arXiv]
DVN-SLAM: Dynamic Visual Neural SLAM Based on Local-Global Encoding
Wenhua Wu, Guangming Wang, Ting Deng, Sebastian Aegidius, Stuart Shanks, Valerio Modugno, Dimitrios Kanoulas, Hesheng Wang
arXiv preprint, 18 Mar 2024
[arXiv]
WSCLoc: Weakly-Supervised Sparse-View Camera Relocalization
Jialu Wang, Kaichen Zhou, Andrew Markham, Niki Trigoni
arXiv preprint, 22 Mar 2024
[arXiv]
NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
Tianchen Deng, Yanbo Wang, Hongle Xie, Hesheng Wang, Jingchuan Wang, Danwei Wang, Weidong Chen
arXiv preprint, 29 Mar 2024
[arXiv]
VRS-NeRF: Visual Relocalization with Sparse Neural Radiance Field
Fei Xue, Ignas Budvytis, Daniel Olmeda Reino, Roberto Cipolla
arXiv preprint, 14 Apr 2024
[arXiv] [Code]
SLAIM: Robust Dense Neural SLAM for Online Tracking and Mapping
Vincent Cartillier, Grant Schindler, Irfan Essa
arXiv preprint, 17 Apr 2024
[arXiv]
EC-SLAM: Real-time Dense Neural RGB-D SLAM System with Effectively Constrained Global Bundle Adjustment
Guanghao Li, Qi Chen, YuXiang Yan, Jian Pu
arXiv preprint, 20 Apr 2024
[arXiv] [Code]
S3-SLAM: Sparse Tri-plane Encoding for Neural Implicit SLAM
Zhiyao Zhang, Yunzhou Zhang, Yanmin Wu, Bin Zhao, Xingshuo Wang, Rui Tian
arXiv preprint, 28 Apr 2024
[arXiv]
Fast Global Localization on Neural Radiance Field
Mangyu Kong, Seongwon Lee, Jaewon Lee, Euntai Kim
arXiv preprint, 18 Jun 2024
[arXiv]
I^2-SLAM: Inverting Imaging Process for Robust Photorealistic Dense SLAM
Gwangtak Bae, Changwoon Choi, Hyeongjun Heo, Sang Min Kim, Young Min Kim
ECCV 2024, 16 Jul 2024
[arXiv]
Evaluating geometric accuracy of NeRF reconstructions compared to SLAM method
Adam Korycki, Colleen Josephson, Steve McGuire
arXiv preprint, 15 Jul 2024
[arXiv]
Visual Localization in 3D Maps: Comparing Point Cloud, Mesh, and NeRF Representations
Lintong Zhang, Yifu Tao, Jiarong Lin, Fu Zhang, Maurice Fallon
arXiv preprint, 21 Aug 2024
[arXiv]
Neural Implicit Representation for Highly Dynamic LiDAR Mapping and Odometry
Qi Zhang, He Wang, Ru Li, Wenbin Li
arXiv preprint, 26 Sep 2024
[arXiv]
iNeRF: Inverting Neural Radiance Fields for Pose Estimation
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin
IROS 2021, 10 Dec 2020
[arXiv] [Github]
🔥NeRF--: Neural Radiance Fields Without Known Camera Parameters
Zirui Wang, Shangzhe Wu, Weidi Xie, Min Chen, Victor Adrian Prisacariu
arXiv preprint, 14 Feb 2021
Abstract
Considering the problem of novel view synthesis (NVS) from only a set of 2D images, we simplify the training process of Neural Radiance Field (NeRF) on forward-facing scenes by removing the requirement of known or pre-computed camera parameters, including both intrinsics and 6DoF poses. To this end, we propose NeRF−−, with three contributions: First, we show that the camera parameters can be jointly optimised as learnable parameters with NeRF training, through a photometric reconstruction; Second, to benchmark the camera parameter estimation and the quality of novel view renderings, we introduce a new dataset of path-traced synthetic scenes, termed as Blender Forward-Facing Dataset (BLEFF); Third, we conduct extensive analyses to understand the training behaviours under various camera motions, and show that in most scenarios, the joint optimisation pipeline can recover accurate camera parameters and achieve comparable novel view synthesis quality as those trained with COLMAP pre-computed camera parameters. Our code and data are available at this https URL.
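The key point is that the per-image camera parameters are just more trainable tensors. A minimal sketch (axis-angle rotations are an assumed parameterization): the pose tensors below would be handed to the same optimizer as the NeRF weights and receive gradients from the photometric loss.

```python
import torch

n_images = 20
rot = torch.nn.Parameter(torch.zeros(n_images, 3))     # per-image axis-angle rotation
trans = torch.nn.Parameter(torch.zeros(n_images, 3))   # per-image translation

def hat(k):
    """Skew-symmetric matrix of k, built so gradients flow back into k."""
    zero = torch.zeros((), dtype=k.dtype)
    return torch.stack([torch.stack([zero, -k[2], k[1]]),
                        torch.stack([k[2], zero, -k[0]]),
                        torch.stack([-k[1], k[0], zero])])

def axis_angle_to_matrix(r):
    """Rodrigues' formula for one (3,) axis-angle vector."""
    theta = r.norm() + 1e-8
    K = hat(r / theta)
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

# rays of image i are regenerated from its current pose estimate at every iteration
i = 0
R = axis_angle_to_matrix(rot[i])
ray_d_world = R @ torch.tensor([0.0, 0.0, -1.0])       # a camera-frame direction lifted to world
ray_o_world = trans[i]
print(ray_d_world, ray_o_world)
```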
iMAP: Implicit Mapping and Positioning in Real-Time
Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew J. Davison
ICCV 2021, 23 Mar 2021
[arXiv] [Project]
GNeRF: GAN-based Neural Radiance Field without Posed Camera
Quan Meng, Anpei Chen, Haimin Luo, Minye Wu, Hao Su, Lan Xu, Xuming He, Jingyi Yu
ICCV 2021, 29 Mar 2021
[arXiv] [Github]
🔥BARF: Bundle-Adjusting Neural Radiance Fields
Chen-Hsuan Lin, Wei-Chiu Ma, Antonio Torralba, Simon Lucey
ICCV 2021, 13 Apr 2021
Abstract
Neural Radiance Fields (NeRF) have recently gained a surge of interest within the computer vision community for its power to synthesize photorealistic novel views of real-world scenes. One limitation of NeRF, however, is its requirement of accurate camera poses to learn the scene representations. In this paper, we propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses -- the joint problem of learning neural 3D representations and registering camera frames. We establish a theoretical connection to classical image alignment and show that coarse-to-fine registration is also applicable to NeRF. Furthermore, we show that naïvely applying positional encoding in NeRF has a negative impact on registration with a synthesis-based objective. Experiments on synthetic and real-world data show that BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time. This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems (e.g. SLAM) and potential applications for dense 3D mapping and reconstruction.
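BARF's coarse-to-fine schedule boils down to a per-frequency weight on the positional encoding. The sketch below reproduces that weighting; the schedule parameter alpha is swept from 0 to the number of bands during training.

```python
import numpy as np

def barf_weights(alpha, n_freqs):
    """w_k = 0 before band k opens, a smooth cosine ramp while it opens, and 1 afterwards."""
    x = np.clip(alpha - np.arange(n_freqs), 0.0, 1.0)
    return (1.0 - np.cos(x * np.pi)) / 2.0

print(barf_weights(alpha=0.0, n_freqs=6))   # all bands off: registration sees only smooth signals
print(barf_weights(alpha=2.5, n_freqs=6))   # low bands fully on, band 2 half open, the rest off
print(barf_weights(alpha=6.0, n_freqs=6))   # full positional encoding, as in standard NeRF
```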
Self-Calibrating Neural Radiance Fields
Yoonwoo Jeong, Seokjun Ahn, Christopher Choy, Animashree Anandkumar, Minsu Cho, Jaesik Park
ICCV 2021, 13 Apr 2021
[arXiv] [Project] [Github]
Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields
Yue Chen, Xingyu Chen, Xuan Wang, Qi Zhang, Yu Guo, Ying Shan, Fei Wang
CVPR 2023, 21 Nov 2022
[arXiv] [Project] [Github]
🔥NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior
Wenjing Bian, Zirui Wang, Kejie Li, Jia-Wang Bian, Victor Adrian Prisacariu
CVPR 2023, 14 Dec 2022
Abstract
Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging. Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes. However, these methods still face difficulties during dramatic camera movement. We tackle this challenging problem by incorporating undistorted monocular depth priors. These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames. This constraint is achieved using our proposed novel loss functions. Experiments on real-world indoor and outdoor scenes show that our method can handle challenging camera trajectories and outperforms existing methods in terms of novel view rendering quality and pose estimation accuracy. Our project page is this https URL.
Towards Open World NeRF-Based SLAM
Daniil Lisus, Connor Holmes, Steven Waslander
CRV 2023, 8 Jan 2023
[arXiv]
F2-NeRF: Fast Neural Radiance Field Training with Free Camera Trajectories
Peng Wang, Yuan Liu, Zhaoxi Chen, Lingjie Liu, Ziwei Liu, Taku Komura, Christian Theobalt, Wenping Wang
CVPR 2023, 28 Mar 2023
[arXiv] [Project] [Github]
Neural Lens Modeling
Wenqi Xian, Aljaž Božič, Noah Snavely, Christoph Lassner
CVPR 2023, 10 Apr 2023
[arXiv] [Project]
LU-NeRF: Scene and Pose Estimation by Synchronizing Local Unposed NeRFs
Zezhou Cheng, Carlos Esteves, Varun Jampani, Abhishek Kar, Subhransu Maji, Ameesh Makadia
arXiv preprint, 8 Jun 2023
[arXiv] [Project]
CamP: Camera Preconditioning for Neural Radiance Fields
Keunhong Park, Philipp Henzler, Ben Mildenhall, Jonathan T. Barron, Ricardo Martin-Brualla
SIGGRAPH Asia 2023, 21 Aug 2023
[arXiv] [Project]
MC-NeRF: Multi-Camera Neural Radiance Fields for Multi-Camera Image Acquisition Systems
Yu Gao, Lutong Su, Hao Liang, Yufeng Yue, Yi Yang, Mengyin Fu
arXiv preprint, 14 Sep 2023
[arXiv] [Github]
BID-NeRF: RGB-D image pose estimation with inverted Neural Radiance Fields
Ágoston István Csehi, Csaba Máté Józsa
NeRF4ADR Workshop, ICCV 2023, 5 Oct 2023
[arXiv]
NeRFtrinsic Four: An End-To-End Trainable NeRF Jointly Optimizing Diverse Intrinsic and Extrinsic Camera Parameters
Hannah Schieber, Fabian Deuser, Bernhard Egger, Norbert Oswald, Daniel Roth
arXiv preprint, 26 Oct 2023
[arXiv]
PoRF: Pose Residual Field for Accurate Neural Surface Reconstruction
Jia-Wang Bian, Wenjing Bian, Victor Adrian Prisacariu, Philip Torr
arXiv preprint, 11 Oct 2023
[arXiv]
CBARF: Cascaded Bundle-Adjusting Neural Radiance Fields from Imperfect Camera Poses
Hongyu Fu, Xin Yu, Lincheng Li, Li Zhang
arXiv preprint, 15 Oct 2023
[arXiv]
Continuous Pose for Monocular Cameras in Neural Implicit Representation
Qi Ma, Danda Pani Paudel, Ajad Chhatkuli, Luc Van Gool
arXiv preprint, 28 Nov 2023
[arXiv]
IL-NeRF: Incremental Learning for Neural Radiance Fields with Camera Pose Alignment
Letian Zhang, Ming Li, Chen Chen, Jie Xu
arXiv preprint, 10 Dec 2023
[arXiv]
Unifying Correspondence, Pose and NeRF for Pose-Free Novel View Synthesis from Stereo Pairs
Sunghwan Hong, Jaewoo Jung, Heeseong Shin, Jiaolong Yang, Seungryong Kim, Chong Luo
CVPR 2024, 12 Dec 2023
[arXiv] [Project] [Code]
Improving Robustness for Joint Optimization of Camera Poses and Decomposed Low-Rank Tensorial Radiance Fields
Bo-Yu Cheng, Wei-Chen Chiu, Yu-Lun Liu
AAAI 2024, 20 Feb 2024
[arXiv] [Project] [Code] [Video]
IFFNeRF: Initialisation Free and Fast 6DoF pose estimation from a single image and a NeRF model
Matteo Bortolon, Theodore Tsesmelis, Stuart James, Fabio Poiesi, Alessio Del Bue
ICRA 2024, 19 Mar 2024
[arXiv] [Project] [Code] [Video]
VF-NeRF: Viewshed Fields for Rigid NeRF Registration
Leo Segre, Shai Avidan
arXiv preprint, 4 Apr 2024
[arXiv]
CT-NeRF: Incremental Optimizing Neural Radiance Field and Poses with Complex Trajectory
Yunlong Ran, Yanxu Li, Qi Ye, Yuchi Huo, Zechun Bai, Jiahao Sun, Jiming Chen
arXiv preprint, 22 Apr 2024
[arXiv]
TD-NeRF: Novel Truncated Depth Prior for Joint Camera Pose and Neural Radiance Field Optimization
Zhen Tan, Zongtan Zhou, Yangbing Ge, Zi Wang, Xieyuanli Chen, Dewen Hu
arXiv preprint, 11 May 2024
[arXiv] [Code]
Leveraging Neural Radiance Fields for Pose Estimation of an Unknown Space Object during Proximity Operations
Antoine Legrand, Renaud Detry, Christophe De Vleeschouwer
arXiv preprint, 21 May 2024
[arXiv]
Camera Relocalization in Shadow-free Neural Radiance Fields
Shiyao Xu, Caiyun Liu, Yuantao Chen, Zhenxin Zhu, Zike Yan, Yongliang Shi, Hao Zhao, Guyue Zhou
ICRA 2024, 23 May 2024
[arXiv] [Code]
SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization
Yiyang Chen, Siyan Dong, Xulong Wang, Lulu Cai, Youyi Zheng, Yanchao Yang
ECCV 2024, 17 Jul 2024
[arXiv] [Code]
Invertible Neural Warp for NeRF
Shin-Fang Chng, Ravi Garg, Hemanth Saratchandran, Simon Lucey
ECCV 2024, 17 Jul 2024
[arXiv] [Project] [Code]
🔥TrackNeRF: Bundle Adjusting NeRF from Sparse and Noisy Views via Feature Tracks
Jinjie Mai, Wenxuan Zhu, Sara Rojas, Jesus Zarzar, Abdullah Hamdi, Guocheng Qian, Bing Li, Silvio Giancola, Bernard Ghanem
ECCV 2024, 20 Aug 2024
Abstract
Neural radiance fields (NeRFs) generally require many images with accurate poses for accurate novel view synthesis, which does not reflect realistic setups where views can be sparse and poses can be noisy. Previous solutions for learning NeRFs with sparse views and noisy poses only consider local geometry consistency with pairs of views. Closely following bundle adjustment in Structure-from-Motion (SfM), we introduce TrackNeRF for more globally consistent geometry reconstruction and more accurate pose optimization. TrackNeRF introduces feature tracks, i.e. connected pixel trajectories across all visible views that correspond to the same 3D points. By enforcing reprojection consistency among feature tracks, TrackNeRF encourages holistic 3D consistency explicitly. Through extensive experiments, TrackNeRF sets a new benchmark in noisy and sparse view reconstruction. In particular, TrackNeRF shows significant improvements over the state-of-the-art BARF and SPARF by ∼8 and ∼1 in terms of PSNR on DTU under various sparse and noisy view setups. The code is available at this https URL.
KRONC: Keypoint-based Robust Camera Optimization for 3D Car Reconstruction
Davide Di Nucci, Alessandro Simoni, Matteo Tomei, Luca Ciuffreda, Roberto Vezzani, Rita Cucchiara
ECCVW 2024, 9 Sep 2024
[arXiv]
Robust SG-NeRF: Robust Scene Graph Aided Neural Surface Reconstruction
Yi Gu, Dongjun Ye, Zhaorui Wang, Jiaxu Wang, Jiahang Cao, Renjing Xu
arXiv preprint, 20 Nov 2024
[arXiv] [Project]
BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes
Chih-Hai Su, Chih-Yao Hu, Shr-Ruei Tsai, Jie-Ying Lee, Chin-Yang Lin, Yu-Lun Liu
SIGGRAPH 2024, 22 Jul 2024
[arXiv] [Project] [Code]
Generative AI meets 3D: A Survey on Text-to-3D in AIGC Era
Chenghao Li, Chaoning Zhang, Atish Waghwase, Lik-Hang Lee, Francois Rameau, Yang Yang, Sung-Ho Bae, Choong Seon Hong
arXiv preprint, 10 May 2023
[arXiv]
Advances in 3D Generation: A Survey
Xiaoyu Li, Qi Zhang, Di Kang, Weihao Cheng, Yiming Gao, Jingbo Zhang, Zhihao Liang, Jing Liao, Yan-Pei Cao, Ying Shan
arXiv preprint, 31 Jan 2024
[arXiv]
A Survey On Text-to-3D Contents Generation In The Wild
Chenhan Jiang
arXiv preprint, 15 May 2024
[arXiv]
Zero-Shot Text-Guided Object Generation with Dream Fields
Ajay Jain, Ben Mildenhall, Jonathan T. Barron, Pieter Abbeel, Ben Poole
CVPR 2022, 2 Dec 2021
[arXiv] [Project] [Github]
CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields
Can Wang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
CVPR 2022, 9 Dec 2021
[arXiv] [Project] [Github]
LaTeRF: Label and Text Driven Object Radiance Fields
Ashkan Mirzaei, Yash Kant, Jonathan Kelly, Igor Gilitschenski
CVPR 2022, 4 Jul 2022
[arXiv] [Project]
DreamFusion: Text-to-3D using 2D Diffusion
Ben Poole, Ajay Jain, Jonathan T. Barron, Ben Mildenhall
arXiv preprint, 29 Sep 2022
[arXiv] [Project] [Unofficial Impl] [threeStudio] [Notes]
Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures
Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, Daniel Cohen-Or
arXiv preprint, 14 Nov 2022
[arXiv] [Github] [threeStudio]
🔥Magic3D: High-Resolution Text-to-3D Content Creation
Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, Tsung-Yi Lin
CVPR 2023, 18 Nov 2022
Abstract
DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: (a) extremely slow optimization of NeRF and (b) low-resolution image space supervision on NeRF, leading to low-quality 3D models with a long processing time. In this paper, we address these limitations by utilizing a two-stage optimization framework. First, we obtain a coarse model using a low-resolution diffusion prior and accelerate with a sparse 3D hash grid structure. Using the coarse representation as the initialization, we further optimize a textured 3D mesh model with an efficient differentiable renderer interacting with a high-resolution latent diffusion model. Our method, dubbed Magic3D, can create high quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also achieving higher resolution. User studies show 61.7% raters to prefer our approach over DreamFusion. Together with the image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues to various creative applications.
[arXiv] [Project] [threeStudio] [Notes]
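Both stages are driven by score distillation from a frozen diffusion model, the loss introduced by DreamFusion. The sketch below shows the shape of that update with stand-in functions; the denoiser call, weighting, and noise schedule are placeholders, not a real diffusion API.

```python
import torch

def sds_grad(rendering, denoiser, text_emb, t, alphas_cumprod):
    """rendering: (1, 3, H, W) differentiable render; returns a gradient w.r.t. its pixels."""
    a = alphas_cumprod[t]
    noise = torch.randn_like(rendering)
    noisy = a.sqrt() * rendering + (1 - a).sqrt() * noise     # diffuse the render to step t
    with torch.no_grad():
        eps_pred = denoiser(noisy, t, text_emb)               # frozen text-conditioned denoiser
    return (1 - a) * (eps_pred - noise)                       # backpropagated into the 3D parameters

# toy stand-ins so the sketch executes
denoiser = lambda x, t, emb: torch.zeros_like(x)
grad = sds_grad(torch.rand(1, 3, 64, 64), denoiser, None, torch.tensor(10),
                torch.linspace(0.999, 0.01, 1000))
print(grad.shape)
```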
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation
Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. Yeh, Greg Shakhnarovich
CVPR 2023, 1 Dec 2022
[arXiv] [Project] [Github] [threeStudio]
SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
Zhizhuo Zhou, Shubham Tulsiani
CVPR 2023, 1 Dec 2022
[arXiv] [Project] [Github]
DiffRF: Rendering-Guided 3D Radiance Field Diffusion
Ashkan Mirzaei, Yash Kant, Jonathan Kelly, Igor Gilitschenski
CVPR 2023, 2 Dec 2022
[arXiv] [Project]
🔥Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models
Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, Shenghua Gao
CVPR 2023, 28 Dec 2022
Abstract
Recent CLIP-guided 3D optimization methods, such as DreamFields and PureCLIPNeRF, have achieved impressive results in zero-shot text-to-3D synthesis. However, due to scratch training and random initialization without prior knowledge, these methods often fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce explicit 3D shape priors into the CLIP-guided 3D optimization process. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as a 3D shape prior. We then use it as the initialization of a neural radiance field and optimize it with the full prompt. To address the challenging text-to-shape generation task, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between the images synthesized by the text-to-image diffusion model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with superior visual quality and shape accuracy compared to state-of-the-art methods.
Text-To-4D Dynamic Scene Generation
Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, Shenghua Gao
arXiv preprint, 26 Jan 2023
[arXiv] [Project]
🔥LERF: Language Embedded Radiance Fields
Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik
arXiv preprint, 16 Mar 2023
Abstract
Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances. In this work we propose Language Embedded Radiance Fields (LERFs), a method for grounding language embeddings from off-the-shelf models like CLIP into NeRF, which enable these types of open-ended language queries in 3D. LERF learns a dense, multi-scale language field inside NeRF by volume rendering CLIP embeddings along training rays, supervising these embeddings across training views to provide multi-view consistency and smooth the underlying language field. After optimization, LERF can extract 3D relevancy maps for a broad range of language prompts interactively in real-time, which has potential use cases in robotics, understanding vision-language models, and interacting with 3D scenes. LERF enables pixel-aligned, zero-shot queries on the distilled 3D CLIP embeddings without relying on region proposals or masks, supporting long-tail open-vocabulary queries hierarchically across the volume. The project website can be found at this https URL.
🔥Shap-E: Generating Conditional 3D Implicit Functions
🔥Shap-E: Generating Conditional 3D Implicit Functions
Heewoo Jun, Alex Nichol
arXiv preprint, 3 May 2023
Abstract
We present Shap-E, a conditional generative model for 3D assets. Unlike recent work on 3D generative models which produce a single output representation, Shap-E directly generates the parameters of implicit functions that can be rendered as both textured meshes and neural radiance fields. We train Shap-E in two stages: first, we train an encoder that deterministically maps 3D assets into the parameters of an implicit function; second, we train a conditional diffusion model on outputs of the encoder. When trained on a large dataset of paired 3D and text data, our resulting models are capable of generating complex and diverse 3D assets in a matter of seconds. When compared to Point-E, an explicit generative model over point clouds, Shap-E converges faster and reaches comparable or better sample quality despite modeling a higher-dimensional, multi-representation output space. We release model weights, inference code, and samples at this https URL.
🔥3DGen: Triplane Latent Diffusion for Textured Mesh Generation
Anchit Gupta, Wenhan Xiong, Yixin Nie, Ian Jones, Barlas Oğuz
arXiv preprint, 9 Mar 2023
Abstract
Latent diffusion models for image generation have crossed a quality threshold which enabled them to achieve mass adoption. Recently, a series of works have made advancements towards replicating this success in the 3D domain, introducing techniques such as point cloud VAE, triplane representation, neural implicit surfaces and differentiable rendering based training. We take another step along this direction, combining these developments in a two-step pipeline consisting of 1) a triplane VAE which can learn latent representations of textured meshes and 2) a conditional diffusion model which generates the triplane features. For the first time this architecture allows conditional and unconditional generation of high quality textured or untextured 3D meshes across multiple diverse categories in a few seconds on a single GPU. It outperforms previous work substantially on image-conditioned and unconditional generation on mesh quality as well as texture generation. Furthermore, we demonstrate the scalability of our model to large datasets for increased quality and diversity. We will release our code and trained models.
[arXiv]
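Since the pipeline builds on a tri-plane representation, the snippet below sketches a generic tri-plane feature lookup (project a 3D point onto the XY/XZ/YZ planes, bilinearly sample, and combine). It is a standard formulation rather than code from the paper; concatenating the three samples is one common design choice, summation is another.

```python
import torch
import torch.nn.functional as F

def triplane_query(planes, xyz):
    """Query a tri-plane feature volume at 3D points.

    planes: (3, C, H, W) feature maps for the XY, XZ and YZ planes.
    xyz:    (N, 3) points assumed to lie in [-1, 1]^3.
    Returns (N, 3*C) concatenated bilinear samples.
    """
    coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]            # projections onto the 3 planes
    feats = []
    for plane, uv in zip(planes, coords):
        grid = uv.view(1, -1, 1, 2)                                       # (1, N, 1, 2) for grid_sample
        sampled = F.grid_sample(plane[None], grid, align_corners=True)    # (1, C, N, 1)
        feats.append(sampled[0, :, :, 0].t())                             # (N, C)
    return torch.cat(feats, dim=-1)
```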
🔥MeshDiffusion: Score-based Generative 3D Mesh Modeling
Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, Weiyang Liu
ICLR 2023, 14 Mar 2023
Abstract
We consider the task of generating realistic 3D shapes, which is useful for a variety of applications such as automatic scene generation and physical simulation. Compared to other 3D representations like voxels and point clouds, meshes are more desirable in practice, because (1) they enable easy and arbitrary manipulation of shapes for relighting and simulation, and (2) they can fully leverage the power of modern graphics pipelines which are mostly optimized for meshes. Previous scalable methods for generating meshes typically rely on sub-optimal post-processing, and they tend to produce overly-smooth or noisy surfaces without fine-grained geometric details. To overcome these shortcomings, we take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes. Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization. We demonstrate the effectiveness of our model on multiple generative tasks.
🔥DreamBooth3D: Subject-Driven Text-to-3D Generation
Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, Yuanzhen Li, Varun Jampani
arXiv preprint, 23 Mar 2023
Abstract
We present DreamBooth3D, an approach to personalize text-to-3D generative models from as few as 3-6 casually captured images of a subject. Our approach combines recent advances in personalizing text-to-image models (DreamBooth) with text-to-3D generation (DreamFusion). We find that naively combining these methods fails to yield satisfactory subject-specific 3D assets due to personalized text-to-image models overfitting to the input viewpoints of the subject. We overcome this through a 3-stage optimization strategy where we jointly leverage the 3D consistency of neural radiance fields together with the personalization capability of text-to-image models. Our method can produce high-quality, subject-specific 3D assets with text-driven modifications such as novel poses, colors and attributes that are not seen in any of the input images of the subject.
🔥Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
Junshu Tang, Tengfei Wang, Bo Zhang, Ting Zhang, Ran Yi, Lizhuang Ma, Dong Chen
arXiv preprint, 23 Mar 2023
Abstract
In this work, we investigate the problem of creating high-fidelity 3D content from only a single image. This is inherently challenging: it essentially involves estimating the underlying 3D geometry while simultaneously hallucinating unseen textures. To address this challenge, we leverage prior knowledge from a well-trained 2D diffusion model to act as 3D-aware supervision for 3D creation. Our approach, Make-It-3D, employs a two-stage optimization pipeline: the first stage optimizes a neural radiance field by incorporating constraints from the reference image at the frontal view and diffusion prior at novel views; the second stage transforms the coarse model into textured point clouds and further elevates the realism with diffusion prior while leveraging the high-quality textures from the reference image. Extensive experiments demonstrate that our method outperforms prior works by a large margin, resulting in faithful reconstructions and impressive visual quality. Our method presents the first attempt to achieve high-quality 3D creation from a single image for general objects and enables various applications such as text-to-3D creation and texture editing.
[arXiv] [Project] [Github] [Notes]
Fantasia3D: Disentangling Geometry and Appearance for High-quality Text-to-3D Content Creation
Rui Chen, Yongwei Chen, Ningxin Jiao, Kui Jia
arXiv preprint, 24 Mar 2023
[arXiv] [Project] [Github] [threeStudio]
DITTO-NeRF: Diffusion-based Iterative Text To Omni-directional 3D Model
Hoigi Seo, Hayeon Kim, Gwanghyun Kim, Se Young Chun
CVPR 2023, 6 Apr 2023
[arXiv] [Project] [Github]
Single-Stage Diffusion NeRF: A Unified Approach to 3D Generation and Reconstruction
Hansheng Chen, Jiatao Gu, Anpei Chen, Wei Tian, Zhuowen Tu, Lingjie Liu, Hao Su
arXiv preprint, 13 Apr 2023
[arXiv] [Project] [Github]
Deceptive-NeRF: Enhancing NeRF Reconstruction using Pseudo-Observations from Diffusion Models
Xinhang Liu, Shiu-hong Kao, Jiaben Chen, Yu-Wing Tai, Chi-Keung Tang
arXiv preprint, 24 May 2023
[arXiv]
🔥ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
Zhengyi Wang, Cheng Lu, Yikai Wang, Fan Bao, Chongxuan Li, Hang Su, Jun Zhu
arXiv preprint, 25 May 2023
Abstract
Score distillation sampling (SDS) has shown great promise in text-to-3D generation by distilling pretrained large-scale text-to-image diffusion models, but suffers from over-saturation, over-smoothing, and low-diversity problems. In this work, we propose to model the 3D parameter as a random variable instead of a constant as in SDS and present variational score distillation (VSD), a principled particle-based variational framework to explain and address the aforementioned issues in text-to-3D generation. We show that SDS is a special case of VSD and leads to poor samples with both small and large CFG weights. In comparison, VSD works well with various CFG weights as ancestral sampling from diffusion models and simultaneously improves the diversity and sample quality with a common CFG weight (i.e., 7.5). We further present various improvements in the design space for text-to-3D such as distillation time schedule and density initialization, which are orthogonal to the distillation algorithm yet not well explored. Our overall approach, dubbed ProlificDreamer, can generate high rendering resolution (i.e., 512×512) and high-fidelity NeRF with rich structure and complex effects (e.g., smoke and drops). Further, initialized from NeRF, meshes fine-tuned by VSD are meticulously detailed and photo-realistic. Project page and codes: this https URL
[arXiv] [Project] [Unofficial Implementation]
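For context, the score distillation sampling (SDS) gradient that the abstract criticizes is commonly written as below, where theta denotes the 3D parameters, g(theta, c) renders an image from camera c, and epsilon_phi is the pretrained diffusion model's noise prediction. As described in the paper, VSD instead treats the rendered scene as a random variable and, roughly speaking, replaces the Gaussian noise target with the prediction of an additional score model fitted to the rendered distribution; refer to the paper for the exact objective.

```latex
% SDS gradient (DreamFusion-style), shown for reference:
%   theta          : 3D scene parameters (e.g., a NeRF)
%   x = g(theta,c) : image rendered from camera c
%   epsilon_phi    : noise prediction of the pretrained text-to-image diffusion model
\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}(\theta)
  = \mathbb{E}_{t,\,\epsilon,\,c}\!\left[
      w(t)\,\bigl(\epsilon_{\phi}(x_t;\,y,\,t) - \epsilon\bigr)\,
      \frac{\partial x}{\partial \theta}
    \right],
  \qquad x = g(\theta, c),\quad x_t = \alpha_t x + \sigma_t \epsilon .
```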
ATT3D: Amortized Text-to-3D Object Synthesis
Jonathan Lorraine, Kevin Xie, Xiaohui Zeng, Chen-Hsuan Lin, Towaki Takikawa, Nicholas Sharp, Tsung-Yi Lin, Ming-Yu Liu, Sanja Fidler, James Lucas
arXiv preprint, 6 Jun 2023
[Project]
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Mukund Varma T, Zexiang Xu, Hao Su
arXiv preprint, 29 Jun 2023
[arXiv] [Project] [Github]
Points-to-3D: Bridging the Gap between Sparse Points and Shape-Controllable Text-to-3D Generation
Chaohui Yu, Qiang Zhou, Jingliang Li, Zhe Zhang, Zhibin Wang, Fan Wang
ACMMM 2023, 26 Jul 2023
[arXiv]
HD-Fusion: Detailed Text-to-3D Generation Leveraging Multiple Noise Estimation
Jinbo Wu, Xiaobo Gao, Xing Liu, Zhengyang Shen, Chen Zhao, Haocheng Feng, Jingtuo Liu, Errui Ding
arXiv preprint, 30 Jul 2023
[arXiv]
AvatarVerse: High-quality & Stable 3D Avatar Creation from Text and Pose
Huichao Zhang, Bowen Chen, Hao Yang, Liao Qu, Xu Wang, Li Chen, Chao Long, Feida Zhu, Kang Du, Min Zheng
arXiv preprint, 7 Aug 2023
[arXiv] [Project]
Animate124: Animating One Image to 4D Dynamic Scene
Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong, Zhenguo Li, Gim Hee Lee
arXiv preprint, 24 Nov 2023
[arXiv] [Project]
Prompt2NeRF-PIL: Fast NeRF Generation via Pretrained Implicit Latent
Jianmeng Liu, Yuyao Zhang, Zeyuan Meng, Yu-Wing Tai, Chi-Keung Tang
arXiv preprint, 5 Dec 2023
[arXiv]
ReconFusion: 3D Reconstruction with Diffusion Priors
Rundi Wu, Ben Mildenhall, Philipp Henzler, Keunhong Park, Ruiqi Gao, Daniel Watson, Pratul P. Srinivasan, Dor Verbin, Jonathan T. Barron, Ben Poole, Aleksander Holynski
CVPR 2024, 5 Dec 2023
[arXiv] [Project]
DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior
Tianyu Huang, Yihan Zeng, Zhilu Zhang, Wan Xu, Hang Xu, Songcen Xu, Rynson W. H. Lau, Wangmeng Zuo
CVPR 2024, 11 Dec 2023
[arXiv] [Code]
Text-Image Conditioned Diffusion for Consistent Text-to-3D Generation
Yuze He, Yushi Bai, Matthieu Lin, Jenny Sheng, Yubin Hu, Qi Wang, Yu-Hui Wen, Yong-Jin Liu
arXiv preprint, 19 Dec 2023
[arXiv]
Carve3D: Improving Multi-view Reconstruction Consistency for Diffusion Models with RL Finetuning
Desai Xie, Jiahao Li, Hao Tan, Xin Sun, Zhixin Shu, Yi Zhou, Sai Bi, Sören Pirk, Arie E. Kaufman
CVPR 2024, 21 Dec 2023
[arXiv] [Project] [Code]
Inpaint4DNeRF: Promptable Spatio-Temporal NeRF Inpainting with Generative Diffusion Models
Han Jiang, Haosen Sun, Ruoxuan Li, Chi-Keung Tang, Yu-Wing Tai
arXiv preprint, 30 Dec 2023
[arXiv]
SIGNeRF: Scene Integrated Generation for Neural Radiance Fields
Jan-Niklas Dihlmann, Andreas Engelhardt, Hendrik Lensch
arXiv preprint, 3 Jan 2024
[arXiv] [Project] [Code] [Video]
GO-NeRF: Generating Virtual Objects in Neural Radiance Fields
Peng Dai, Feitong Tan, Xin Yu, Yinda Zhang, Xiaojuan Qi
arXiv preprint, 11 Jan 2024
[arXiv] [Project]
Sketch2NeRF: Multi-view Sketch-guided Text-to-3D Generation
Minglin Chen, Longguang Wang, Weihao Yuan, Yukun Wang, Zhe Sheng, Yisheng He, Zilong Dong, Liefeng Bo, Yulan Guo
arXiv preprint, 25 Jan 2024
[arXiv]
ReplaceAnything3D: Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
Edward Bartrum, Thu Nguyen-Phuoc, Chris Xie, Zhengqin Li, Numair Khan, Armen Avetisyan, Douglas Lanman, Lei Xiao
arXiv preprint, 31 Jan 2024
[arXiv] [Project]
ViewFusion: Learning Composable Diffusion Models for Novel View Synthesis
Bernard Spiegl, Andrea Perin, Stéphane Deny, Alexander Ilin
arXiv preprint, 5 Feb 2024
[arXiv]
VP3D: Unleashing 2D Visual Prompt for Text-to-3D Generation
Yang Chen, Yingwei Pan, Haibo Yang, Ting Yao, Tao Mei
CVPR 2024, 25 Mar 2024
[arXiv] [Project]
FlexiDreamer: Single Image-to-3D Generation with FlexiCubes
Ruowen Zhao, Zhengyi Wang, Yikai Wang, Zihan Zhou, Jun Zhu
arXiv preprint, 1 Apr 2024
[arXiv] [Project] [Code]
SC4D: Sparse-Controlled Video-to-4D Generation and Motion Transfer
Zijie Wu, Chaohui Yu, Yanqin Jiang, Chenjie Cao, Fan Wang, Xiang Bai
arXiv preprint, 4 Apr 2024
[arXiv] [Project] [Code] [Video]
Magic-Boost: Boost 3D Generation with Multi-View Conditioned Diffusion
Fan Yang, Jianfeng Zhang, Yichun Shi, Bowen Chen, Chenxu Zhang, Huichao Zhang, Xiaofeng Yang, Jiashi Feng, Guosheng Lin
arXiv preprint, 9 Apr 2024
[arXiv]
Enhancing 3D Fidelity of Text-to-3D using Cross-View Correspondences
Seungwook Kim, Kejie Li, Xueqing Deng, Yichun Shi, Minsu Cho, Peng Wang
CVPR 2024, 16 Apr 2024
[arXiv]
MeshLRM: Large Reconstruction Model for High-Quality Mesh
Xinyue Wei, Kai Zhang, Sai Bi, Hao Tan, Fujun Luan, Valentin Deschaintre, Kalyan Sunkavalli, Hao Su, Zexiang Xu
arXiv preprint, 18 Apr 2024
[arXiv] [Project]
SketchDream: Sketch-based Text-to-3D Generation and Editing
Feng-Lin Liu, Hongbo Fu, Yu-Kun Lai, Lin Gao
arXiv preprint, 10 May 2024
[arXiv]
4Diffusion: Multi-view Video Diffusion Model for 4D Generation
Haiyu Zhang, Xinyuan Chen, Yaohui Wang, Xihui Liu, Yunhong Wang, Yu Qiao
arXiv preprint, 31 May 2024
[arXiv] [Project] [Code]
Rethinking Score Distillation as a Bridge Between Image Distributions
David McAllister, Songwei Ge, Jia-Bin Huang, David W. Jacobs, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa
arXiv preprint, 13 Jun 2024
[arXiv] [Project]
Preserving Identity with Variational Score for General-purpose 3D Editing
Duong H. Le, Tuan Pham, Aniruddha Kembhavi, Stephan Mandt, Wei-Chiu Ma, Jiasen Lu
arXiv preprint, 13 Jun 2024
[arXiv]
Generative Lifting of Multiview to 3D from Unknown Pose: Wrapping NeRF inside Diffusion
Xin Yuan, Rana Hanocka, Michael Maire
arXiv preprint, 11 Jun 2024
[arXiv]
C3DAG: Controlled 3D Animal Generation using 3D pose guidance
Sandeep Mishra, Oindrila Saha, Alan C. Bovik
arXiv preprint, 11 Jun 2024
[arXiv]
OrientDream: Streamlining Text-to-3D Generation
Yuzhong Huang, Zhong Li, Zhang Chen, Zhiyuan Ren, Guosheng Lin, Fred Morstatter, Yi Xu
arXiv preprint, 14 Jun 2024
[arXiv]
Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text
Xinyang Li, Zhangyu Lai, Linning Xu, Yansong Qu, Liujuan Cao, Shengchuan Zhang, Bo Dai, Rongrong Ji
arXiv preprint, 25 Jun 2024
[arXiv] [Code]
A3D: Does Diffusion Dream about 3D Alignment?
Savva Ignatyev, Nina Konovalova, Daniil Selikhanovych, Nikolay Patakin, Oleg Voynov, Dmitry Senushkin, Alexander Filippov, Anton Konushin, Peter Wonka, Evgeny Burnaev
arXiv preprint, 21 Jun 2024
[arXiv]
DreamDissector: Learning Disentangled Text-to-3D Generation from 2D Diffusion Priors
Zizheng Yan, Jiapeng Zhou, Fanpeng Meng, Yushuang Wu, Lingteng Qiu, Zisheng Ye, Shuguang Cui, Guanying Chen, Xiaoguang Han
ECCV 2024, 23 Jul 2024
[arXiv] [Project]
HOTS3D: Hyper-Spherical Optimal Transport for Semantic Alignment of Text-to-3D Generation
Zezeng Li, Weimin Wang, WenHai Li, Na Lei, Xianfeng Gu
arXiv preprint, 19 Jul 2024
[arXiv]
DreamCatalyst: Fast and High-Quality 3D Editing via Controlling Editability and Identity
Jiwook Kim, Seonho Lee, Jaeyo Shin, Jiho Choi, Hyunjung Shim
arXiv preprint, 16 Jul 2024
[arXiv] [Project]
Local Implicit Ray Function for Generalizable Radiance Field Representation
Xin Huang, Qi Zhang, Ying Feng, Xiaoyu Li, Xuan Wang, Qing Wang
CVPR 2023, 25 Apr 2023
[arXiv] [Project] [Video]
MuRF: Multi-Baseline Radiance Fields
Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu
CVPR 2024, 7 Dec 2023
[arXiv] [Project] [Code]
GD^2-NeRF: Generative Detail Compensation via GAN and Diffusion for One-shot Generalizable Neural Radiance Fields
Xiao Pan, Zongxin Yang, Shuai Bai, Yi Yang
arXiv preprint, 1 Jan 2024
[arXiv]
Learning Robust Generalizable Radiance Field with Visibility and Feature Augmented Point Representation
Jiaxu Wang, Ziyi Zhang, Renjing Xu
arXiv preprint, 25 Jan 2024
[arXiv]
Generalizable Novel-View Synthesis using a Stereo Camera
Haechan Lee, Wonjoon Jin, Seung-Hwan Baek, Sunghyun Cho
CVPR 2024, 21 Apr 2024
[arXiv] [Project] [Code]
Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields
Tianqi Liu, Xinyi Ye, Min Shi, Zihao Huang, Zhiyu Pan, Zhan Peng, Zhiguo Cao
CVPR 2024, 26 Apr 2024
[arXiv] [Project] [Code]
🔥LaRa: Efficient Large-Baseline Radiance Fields
Anpei Chen, Haofei Xu, Stefano Esposito, Siyu Tang, Andreas Geiger
arXiv preprint, 5 Jul 2024
Abstract
Radiance field methods have achieved photorealistic novel view synthesis and geometry reconstruction. But they are mostly applied in per-scene optimization or small-baseline settings. While several recent works investigate feed-forward reconstruction with large baselines by utilizing transformers, they all operate with a standard global attention mechanism and hence ignore the local nature of 3D reconstruction. We propose a method that unifies local and global reasoning in transformer layers, resulting in improved quality and faster convergence. Our model represents scenes as Gaussian Volumes and combines this with an image encoder and Group Attention Layers for efficient feed-forward reconstruction. Experimental results demonstrate that our model, trained for two days on four GPUs, demonstrates high fidelity in reconstructing 360 deg radiance fields, and robustness to zero-shot and out-of-domain testing. Our project Page: this https URL.
[arXiv]
GMT: Enhancing Generalizable Neural Rendering via Geometry-Driven Multi-Reference Texture Transfer
Youngho Yoon, Hyun-Kurl Jang, Kuk-Jin Yoon
ECCV 2024, 1 Oct 2024
[arXiv] [Code]
OPONeRF: One-Point-One NeRF for Robust Neural Rendering
Yu Zheng, Yueqi Duan, Kangfu Zheng, Hongru Yan, Jiwen Lu, Jie Zhou
arXiv preprint, 30 Sep 2024
[arXiv] [Project] [Code]
Towards Degradation-Robust Reconstruction in Generalizable NeRF
Chan Ho Park, Ka Leong Cheng, Zhicheng Wang, Qifeng Chen
arXiv preprint, 18 Nov 2024
[arXiv]
HollowNeRF: Pruning Hashgrid-Based NeRFs with Trainable Collision Mitigation
Xiufeng Xie, Riccardo Gherardi, Zhihong Pan, Stephen Huang
ICCV 2023, 19 Aug 2023
[arXiv]
SPC-NeRF: Spatial Predictive Compression for Voxel Based Radiance Field
Zetian Song, Wenhong Duan, Yuhuai Zhang, Shiqi Wang, Siwei Ma, Wen Gao
arXiv preprint, 26 Feb 2024
[arXiv]
NeRFCodec: Neural Feature Compression Meets Neural Radiance Fields for Memory-Efficient Scene Representation
Sicheng Li, Hao Li, Yiyi Liao, Lu Yu
CVPR 2024, 2 Apr 2024
[arXiv]
How Far Can We Compress Instant-NGP-Based NeRF?
Yihang Chen, Qianyi Wu, Mehrtash Harandi, Jianfei Cai
arXiv preprint, 6 Jun 2024
[arXiv] [Project] [Code]
Explicit_NeRF_QA: A Quality Assessment Database for Explicit NeRF Model Compression
Yuke Xing, Qi Yang, Kaifa Yang, Yilin Xu, Zhu Li
arXiv preprint, 11 Jul 2024
[arXiv]
HPC: Hierarchical Progressive Coding Framework for Volumetric Video
Zihan Zheng, Houqiang Zhong, Qiang Hu, Xiaoyun Zhang, Li Song, Ya Zhang, Yanfeng Wang
arXiv preprint, 12 Jul 2024
[arXiv]
Lagrangian Hashing for Compressed Neural Field Representations
Shrisudhan Govindarajan, Zeno Sambugaro, Akhmedkhan (Ahan) Shabanov, Towaki Takikawa, Daniel Rebain, Weiwei Sun, Nicola Conci, Kwang Moo Yi, Andrea Tagliasacchi
arXiv preprint, 9 Sep 2024
[arXiv] [Project]
Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB
Jae Yong Lee, Yuqun Wu, Chuhang Zou, Derek Hoiem, Shenlong Wang
arXiv preprint, 24 Sep 2024
[arXiv]
Disentangled Generation and Aggregation for Robust Radiance Fields
Shihe Shen, Huachen Gao, Wangze Xu, Rui Peng, Luyang Tang, Kaiqiang Xiong, Jianbo Jiao, Ronggang Wang
ECCV 2024, 24 Sep 2024
[arXiv] [Project]
GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields
Michael Niemeyer, Andreas Geiger
CVPR 2021, 24 Nov 2020
[arXiv] [Project] [Github]
Neural Point-Based Graphics
Kara-Ali Aliev, Artem Sevastopolsky, Maria Kolos, Dmitry Ulyanov, Victor Lempitsky
arXiv preprint, 19 Jun 2019
[arXiv] [Github] [Video]
Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance
Lior Yariv, Yoni Kasten, Dror Moran, Meirav Galun, Matan Atzmon, Ronen Basri, Yaron Lipman
NeurIPS 2020, 22 Mar 2020
[arXiv] [Project] [Github]
Neural RGB-D Surface Reconstruction
Dejan Azinović, Ricardo Martin-Brualla, Dan B Goldman, Matthias Nießner, Justus Thies
CVPR 2022, 9 Apr 2021
[arXiv] [Project] [Github]
🔥NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction
Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, Wenping Wang
NeurIPS 2021, 20 Jun 2021
Abstract
We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground mask as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF and its variants, use volume rendering to produce a neural scene representation with robustness of optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias in the first order of approximation, thus leading to more accurate surface reconstruction even without the mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state-of-the-arts in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
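The central trick, converting SDF values into volume-rendering opacities so that rendering weights concentrate at the zero-level set, can be summarized with the discrete conversion below. The sketch follows my reading of the paper's formulation; treat the epsilon and clamping details as illustrative rather than exact.

```python
import torch

def neus_alpha_from_sdf(sdf_vals, s):
    """Convert consecutive SDF samples along a ray into opacities (NeuS-style).

    sdf_vals: (N,) signed distances at ordered ray samples p_1..p_N.
    s: sharpness of the logistic CDF Phi_s(x) = sigmoid(s * x), learnable in NeuS.
    Returns (N-1,) alpha values usable in standard alpha compositing.
    """
    cdf = torch.sigmoid(s * sdf_vals)                        # Phi_s(f(p_i))
    alpha = (cdf[:-1] - cdf[1:]) / (cdf[:-1] + 1e-5)          # opacity crossed between samples
    return alpha.clamp(min=0.0)                               # zero where the SDF is increasing

# Compositing then proceeds exactly as in NeRF:
#   w_i = alpha_i * prod_{j<i} (1 - alpha_j);   color = sum_i w_i * c_i
```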
Volume Rendering of Neural Implicit Surfaces
Lior Yariv, Jiatao Gu, Yoni Kasten, Yaron Lipman
NeurIPS 2021, 22 Jun 2021
[arXiv] [Project] [Github]
HF-NeuS: Improved Surface Reconstruction Using High-Frequency Details
Yiqun Wang, Ivan Skorokhodov, Peter Wonka
NeurIPS 2022, 15 Jun 2022
[arXiv] [Github] [Notes]
NPBG++: Accelerating Neural Point-Based Graphics
Ruslan Rakhimov, Andrei-Timotei Ardelean, Victor Lempitsky, Evgeny Burnaev
CVPR 2022, 24 Mar 2022
[arXiv] [Project] [Github]
NeuS2: Fast Learning of Neural Implicit Surfaces for Multi-view Reconstruction
Yiming Wang, Qin Han, Marc Habermann, Kostas Daniilidis, Christian Theobalt, Lingjie Liu
ICCV 2023, 10 Dec 2022
[arXiv] [Project] [Github]
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces
Yiqun Wang, Ivan Skorokhodov, Peter Wonka
CVPR 2023, 9 May 2023
[arXiv] [Github]
NeuManifold: Neural Watertight Manifold Reconstruction with Efficient and High-Quality Rendering Support
Xinyue Wei, Fanbo Xiang, Sai Bi, Anpei Chen, Kalyan Sunkavalli, Zexiang Xu, Hao Su
arXiv preprint, 26 May 2023
[arXiv] [Project]
NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors
Jiepeng Wang, Peng Wang, Xiaoxiao Long, Christian Theobalt, Taku Komura, Lingjie Liu, Wenping Wang
arXiv preprint, 27 Jun 2022
[arXiv] [Project] [Github]
🔥TensoSDF: Roughness-aware Tensorial Representation for Robust Geometry and Material Reconstruction
Jia Li, Lu Wang, Lei Zhang, Beibei Wang
SIGGRAPH 2024, 5 Feb 2024
Abstract
Reconstructing objects with realistic materials from multi-view images is problematic, since it is highly ill-posed. Although the neural reconstruction approaches have exhibited impressive reconstruction ability, they are designed for objects with specific materials (e.g., diffuse or specular materials). To this end, we propose a novel framework for robust geometry and material reconstruction, where the geometry is expressed with the implicit signed distance field (SDF) encoded by a tensorial representation, namely TensoSDF. At the core of our method is the roughness-aware incorporation of the radiance and reflectance fields, which enables a robust reconstruction of objects with arbitrary reflective materials. Furthermore, the tensorial representation enhances geometry details in the reconstructed surface and reduces the training time. Finally, we estimate the materials using an explicit mesh for efficient intersection computation and an implicit SDF for accurate representation. Consequently, our method can achieve more robust geometry reconstruction, outperform the previous works in terms of relighting quality, and reduce 50% training times and 70% inference time.
[arXiv]
H2O-SDF: Two-phase Learning for 3D Indoor Reconstruction using Object Surface Fields
Minyoung Park, Mirae Do, YeonJae Shin, Jaeseok Yoo, Jongkwang Hong, Joongrock Kim, Chul Lee
arXiv preprint, 13 Feb 2024
[arXiv]
MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance
Yuqun Wu, Jae Yong Lee, Chuhang Zou, Shenlong Wang, Derek Hoiem
arXiv preprint, 12 Apr 2024
[arXiv]
ActiveNeuS: Active 3D Reconstruction using Neural Implicit Surface Uncertainty
Hyunseo Kim, Hyeonseo Yang, Taekyung Kim, YoonSung Kim, Jin-Hwa Kim, Byoung-Tak Zhang
arXiv preprint, 4 May 2024
[arXiv]
RPBG: Towards Robust Neural Point-based Graphics in the Wild
Qingtian Zhu, Zizhuang Wei, Zhongtian Zheng, Yifan Zhan, Zhuyu Yao, Jiawang Zhang, Kejian Wu, Yinqiang Zheng
arXiv preprint, 9 May 2024
[arXiv] [Code]
LDM: Large Tensorial SDF Model for Textured Mesh Generation
Rengan Xie, Wenting Zheng, Kai Huang, Yizheng Chen, Qi Wang, Qi Ye, Wei Chen, Yuchi Huo
arXiv preprint, 23 May 2024
[arXiv]
RaNeuS: Ray-adaptive Neural Surface Reconstruction
Yida Wang, David Joseph Tan, Nassir Navab, Federico Tombari
3DV 2024, 14 Jun 2024
[arXiv]
Neural Geometry Processing via Spherical Neural Surfaces
Romy Williamson, Niloy J. Mitra
arXiv preprint, 10 Jul 2024
[arXiv]
ActiveNeRF: Learning Accurate 3D Geometry by Active Pattern Projection
Jianyu Tao, Changping Hu, Edward Yang, Jing Xu, Rui Chen
arXiv preprint, 13 Aug 2024
[arXiv]
MGFs: Masked Gaussian Fields for Meshing Building based on Multi-View Images
Tengfei Wang, Zongqian Zhan, Rui Xia, Linxia Ji, Xin Wang
arXiv preprint, 6 Aug 2024
[arXiv]
Neural Surface Reconstruction and Rendering for LiDAR-Visual Systems
Jianheng Liu, Chunran Zheng, Yunfei Wan, Bowen Wang, Yixi Cai, Fu Zhang
arXiv preprint, 9 Sep 2024
[arXiv] [Code]
Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction
Zijie Jiang, Tianhan Xu, Hiroharu Kato
ECCV 2024, 11 Sep 2024
[arXiv]
EventNeRF: Neural Radiance Fields from a Single Colour Event Camera
Viktor Rudnev, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik
CVPR 2023, 23 Jun 2022
[arXiv] [Project] [Github]
Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction
Sixu Li, Chaojian Li, Wenbo Zhu, Boyang (Tony) Yu, Yang (Katie) Zhao, Cheng Wan, Haoran You, Huihong Shi, Yingyan (Celine) Lin
ISCA 2023, 24 Apr 2023
[arXiv]
Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion
Weng Fei Low, Gim Hee Lee
ICCV 2023, 15 Sep 2023
[arXiv] [Project] [Github]
Deformable Neural Radiance Fields using RGB and Event Cameras
Qi Ma, Danda Pani Paudel, Ajad Chhatkuli, Luc Van Gool
arXiv preprint, 25 Sep 2023
[arXiv]
USB-NeRF: Unrolling Shutter Bundle Adjusted Neural Radiance Fields
Moyang Li, Peng Wang, Lingzhe Zhao, Bangyan Liao, Peidong Liu
arXiv preprint, 4 Oct 2023
[arXiv]
Thermal-NeRF: Neural Radiance Fields from an Infrared Camera
Tianxiang Ye, Qi Wu, Junyuan Deng, Guoqing Liu, Liu Liu, Songpengcheng Xia, Liang Pang, Wenxian Yu, Ling Pei
arXiv preprint, 15 Mar 2024
[arXiv]
URS-NeRF: Unordered Rolling Shutter Bundle Adjustment for Neural Radiance Fields
Bo Xu, Ziao Liu, Mengqi Guo, Jiancheng Li, Gim Hee Lee
arXiv preprint, 15 Mar 2024
[arXiv]
Spike-NeRF: Neural Radiance Field Based On Spike Camera
Yijia Guo, Yuanxi Bai, Liwen Hu, Mianzhi Liu, Ziyi Guo, Lei Ma, Tiejun Huang
ICME 2024, 25 Mar 2024
[arXiv]
Mitigating Motion Blur in Neural Radiance Fields with Events and Frames
Marco Cannici, Davide Scaramuzza
CVPR 2024, 28 Mar 2024
[arXiv]
SpikeNVS: Enhancing Novel View Synthesis from Blurry Images via Spike Camera
Gaole Dai, Zhenyu Wang, Qinwen Xu, Wen Cheng, Ming Lu, Boxing Shi, Shanghang Zhang, Tiejun Huang
arXiv preprint, 10 Apr 2024
[arXiv]
Radiance Fields from Photons
Sacha Jungerman, Mohit Gupta
arXiv preprint, 12 Jul 2024
[arXiv]
TeX-NeRF: Neural Radiance Fields from Pseudo-TeX Vision
Chonghao Zhong, Chao Xu
arXiv preprint, 7 Oct 2024
[arXiv]
NeRF-enabled Analysis-Through-Synthesis for ISAR Imaging of Small Everyday Objects with Sparse and Noisy UWB Radar Data
Md Farhan Tasnim Oshim, Albert Reed, Suren Jayasuriya, Tauhidur Rahman
arXiv preprint, 14 Oct 2024
[arXiv]
DirectL: Efficient Radiance Fields Rendering for 3D Light Field Displays
Zongyuan Yang, Baolin Liu, Yingde Song, Yongping Xiong, Lan Yi, Zhaohe Zhang, Xunbo Yu
arXiv preprint, 19 Jul 2024
[arXiv] [Project]
🔥Point-NeRF: Point-based Neural Radiance Fields
Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, Kalyan Sunkavalli, Ulrich Neumann
CVPR 2022, 21 Jan 2022
Abstract
Volumetric neural rendering methods like NeRF generate high-quality view synthesis results but are optimized per-scene leading to prohibitive reconstruction time. On the other hand, deep multi-view stereo methods can quickly reconstruct scene geometry via direct network inference. Point-NeRF combines the advantages of these two approaches by using neural 3D point clouds, with associated neural features, to model a radiance field. Point-NeRF can be rendered efficiently by aggregating neural point features near scene surfaces, in a ray marching-based rendering pipeline. Moreover, Point-NeRF can be initialized via direct inference of a pre-trained deep network to produce a neural point cloud; this point cloud can be finetuned to surpass the visual quality of NeRF with 30X faster training time. Point-NeRF can be combined with other 3D reconstruction methods and handles the errors and outliers in such methods via a novel pruning and growing mechanism. The experiments on the DTU, the NeRF Synthetics, the ScanNet and the Tanks and Temples datasets demonstrate Point-NeRF can surpass the existing methods and achieve the state-of-the-art results.
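A heavily simplified sketch of the point-based aggregation described above: features of nearby neural points are blended at each shading location before being decoded into density and color. The actual method uses per-point MLPs, confidence values and a pruning/growing mechanism; the plain inverse-distance weighting here is only a stand-in.

```python
import torch

def aggregate_neural_points(query, point_xyz, point_feat, k=8):
    """Inverse-distance-weighted blend of the K nearest neural point features
    around one shading location (a simplified Point-NeRF-style aggregation).

    query:      (3,) shading point on a ray.
    point_xyz:  (P, 3) neural point positions.
    point_feat: (P, C) per-point neural features.
    Returns (C,) aggregated feature, to be decoded into density/color by small MLPs.
    """
    dists = torch.norm(point_xyz - query, dim=-1)             # (P,)
    knn_d, knn_idx = torch.topk(dists, k, largest=False)      # K nearest neighbours
    w = 1.0 / (knn_d + 1e-8)
    w = w / w.sum()                                            # normalized inverse-distance weights
    return (w[:, None] * point_feat[knn_idx]).sum(dim=0)
```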
Tetra-NeRF: Representing Neural Radiance Fields Using Tetrahedra
Jonas Kulhanek, Torsten Sattler
arXiv preprint, 19 Apr 2023
[arXiv] [Project] [Github] [Notes]
Neural LiDAR Fields for Novel View Synthesis
Shengyu Huang, Zan Gojcic, Zian Wang, Francis Williams, Yoni Kasten, Sanja Fidler, Konrad Schindler, Or Litany
arXiv preprint, 2 May 2023
[arXiv] [Project] [Notes]
Point2Pix: Photo-Realistic Point Cloud Rendering via Neural Radiance Fields
Tao Hu, Xiaogang Xu, Shu Liu, Jiaya Jia
arXiv preprint, 29 Mar 2023
[arXiv] [Video]
Just Add $100 More: Augmenting NeRF-based Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem
Mincheol Chang, Siyeong Lee, Jinkyu Kim, Namil Kim
arXiv preprint, 18 Mar 2024
[arXiv]
GPN: Generative Point-based NeRF
Haipeng Wang
arXiv preprint, 12 Apr 2024
[arXiv]
Hologram: Realtime Holographic Overlays via LiDAR Augmented Reconstruction
Ekansh Agrawal
arXiv preprint, 12 May 2024
[arXiv]
A Probabilistic Formulation of LiDAR Mapping with Neural Radiance Fields
Matthew McDermott, Jason Rife
arXiv preprint, 4 Nov 2024
[arXiv] [Code]
AutoNeRF: Training Implicit Scene Representations with Autonomous Agents
Pierre Marza, Laetitia Matignon, Olivier Simonin, Dhruv Batra, Christian Wolf, Devendra Singh Chaplot
arXiv preprint, 21 Apr 2023
[arXiv] [Project] [Video]
NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to Robotics
Javier Yu, Jun En Low, Keiko Nagami, Mac Schwager
ICRA 2023, 16 May 2023
[arXiv] [Github] [Video]
🔥AD-NeRF: Audio Driven Neural Radiance Fields for Talking Head Synthesis
Yudong Guo, Keyu Chen, Sen Liang, Yong-Jin Liu, Hujun Bao, Juyong Zhang
ICCV 2021, 20 Mar 2021
Abstract
Generating high-fidelity talking head video by fitting with the input audio sequence is a challenging problem that receives considerable attentions recently. In this paper, we address this problem with the aid of neural scene representation networks. Our method is completely different from existing methods that rely on intermediate representations like 2D landmarks or 3D face models to bridge the gap between audio input and video output. Specifically, the feature of input audio signal is directly fed into a conditional implicit function to generate a dynamic neural radiance field, from which a high-fidelity talking-head video corresponding to the audio signal is synthesized using volume rendering. Another advantage of our framework is that not only the head (with hair) region is synthesized as previous methods did, but also the upper body is generated via two individual neural radiance fields. Experimental results demonstrate that our novel framework can (1) produce high-fidelity and natural results, and (2) support free adjustment of audio signals, viewing directions, and background images. Code is available at this https URL.
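The abstract's key idea, feeding the audio feature directly into a conditional implicit function, amounts to conditioning a NeRF-style MLP on a per-frame audio embedding. Below is a toy illustration of such a conditional field; the layer sizes, encoding dimensions and audio feature dimension are arbitrary assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AudioConditionedNeRF(nn.Module):
    """Toy audio-conditioned radiance field: the per-frame audio feature is
    concatenated with the encoded position (and view direction for color)."""

    def __init__(self, pos_dim=63, dir_dim=27, audio_dim=64, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(pos_dim + audio_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.color_head = nn.Sequential(
            nn.Linear(hidden + dir_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, x_enc, d_enc, audio_feat):
        h = self.trunk(torch.cat([x_enc, audio_feat], dim=-1))
        sigma = torch.relu(self.sigma_head(h))              # density
        rgb = self.color_head(torch.cat([h, d_enc], dim=-1))  # view-dependent color
        return sigma, rgb
```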
MoFaNeRF: Morphable Facial Neural Radiance Field
Yiyu Zhuang, Hao Zhu, Xusen Sun, Xun Cao
ECCV 2022, 4 Dec 2021
[arXiv] [Project] [Github] [Video]
AutoAvatar: Autoregressive Neural Fields for Dynamic Avatar Modeling
Ziqian Bai, Timur Bagautdinov, Javier Romero, Michael Zollhöfer, Ping Tan, Shunsuke Saito
ECCV 2022, 25 Mar 2022
[arXiv] [Project] [Github] [Video]
UV Volumes for Real-time Rendering of Editable Free-view Human Performance
Yue Chen, Xuan Wang, Xingyu Chen, Qi Zhang, Xiaoyu Li, Yu Guo, Jue Wang, Fei Wang
CVPR 2023, 27 Mar 2022
[arXiv] [Project] [Github]
Learning Dynamic Facial Radiance Fields for Few-Shot Talking Head Synthesis
Shuai Shen, Wanhua Li, Zheng Zhu, Yueqi Duan, Jie Zhou, Jiwen Lu
ECCV 2022, 24 Jul 2022
[arXiv] [Project] [Github] [Video]
EVA3D: Compositional 3D Human Generation from 2D Image Collections
Fangzhou Hong, Zhaoxi Chen, Yushi Lan, Liang Pan, Ziwei Liu
ICLR 2023, 10 Oct 2022
[arXiv] [Project] [Github] [Video]
🔥Rodin: A Generative Model for Sculpting 3D Digital Avatars Using Diffusion
Tengfei Wang, Bo Zhang, Ting Zhang, Shuyang Gu, Jianmin Bao, Tadas Baltrusaitis, Jingjing Shen, Dong Chen, Fang Wen, Qifeng Chen, Baining Guo
arXiv preprint, 12 Dec 2022
Abstract
This paper presents a 3D generative model that uses diffusion models to automatically generate 3D digital avatars represented as neural radiance fields. A significant challenge in generating such avatars is that the memory and processing costs in 3D are prohibitive for producing the rich details required for high-quality avatars. To tackle this problem we propose the roll-out diffusion network (Rodin), which represents a neural radiance field as multiple 2D feature maps and rolls out these maps into a single 2D feature plane within which we perform 3D-aware diffusion. The Rodin model brings the much-needed computational efficiency while preserving the integrity of diffusion in 3D by using 3D-aware convolution that attends to projected features in the 2D feature plane according to their original relationship in 3D. We also use latent conditioning to orchestrate the feature generation for global coherence, leading to high-fidelity avatars and enabling their semantic editing based on text prompts. Finally, we use hierarchical synthesis to further enhance details. The 3D avatars generated by our model compare favorably with those produced by existing generative techniques. We can generate highly detailed avatars with realistic hairstyles and facial hair like beards. We also demonstrate 3D avatar generation from image or text as well as text-guided editability.
InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds
Tianjian Jiang, Xu Chen, Jie Song, Otmar Hilliges
arXiv preprint, 20 Dec 2022
[arXiv] [Project] [Github] [Video]
PersonNeRF: Personalized Reconstruction from Photo Collections
Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman
arXiv preprint, 16 Feb 2023
[arXiv] [Project] [Video]
Learning Neural Volumetric Representations of Dynamic Humans in Minutes
Chen Geng, Sida Peng, Zhen Xu, Hujun Bao, Xiaowei Zhou
CVPR 2023, 23 Feb 2023
[arXiv] [Project] [Video]
NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images
Mingwu Zheng, Haiyu Zhang, Hongyu Yang, Di Huang
CVPR 2023, 24 Mar 2023
[arXiv] [Github] [Notes]
🔥HumanRF: High-Fidelity Neural Radiance Fields for Humans in Motion
Mustafa Işık, Martin Rünz, Markos Georgopoulos, Taras Khakhulin, Jonathan Starck, Lourdes Agapito, Matthias Nießner
SIGGRAPH 2023, 10 May 2023
Abstract
Representing human performance at high-fidelity is an essential building block in diverse applications, such as film production, computer games or videoconferencing. To close the gap to production-level quality, we introduce HumanRF, a 4D dynamic neural scene representation that captures full-body appearance in motion from multi-view video input, and enables playback from novel, unseen viewpoints. Our novel representation acts as a dynamic video encoding that captures fine details at high compression rates by factorizing space-time into a temporal matrix-vector decomposition. This allows us to obtain temporally coherent reconstructions of human actors for long sequences, while representing high-resolution details even in the context of challenging motion. While most research focuses on synthesizing at resolutions of 4MP or lower, we address the challenge of operating at 12MP. To this end, we introduce ActorsHQ, a novel multi-view dataset that provides 12MP footage from 160 cameras for 16 sequences with high-fidelity, per-frame mesh reconstructions. We demonstrate challenges that emerge from using such high-resolution data and show that our newly introduced HumanRF effectively leverages this data, making a significant step towards production-level quality novel view synthesis.
[arXiv] [Project] [Github] [Video] [Notes]
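The "temporal matrix-vector decomposition" mentioned above factorizes a 4D space-time feature field into spatial components modulated by per-frame temporal coefficients. The toy module below illustrates one such factorization; the component count, grid resolution and nearest-neighbour lookup are simplifications and do not reflect the paper's exact design.

```python
import torch
import torch.nn as nn

class SpaceTimeFactorization(nn.Module):
    """Toy temporal matrix-vector factorization of a 4D feature field:
    feature(x, t) = sum_r spatial_r(x) * temporal_r(t).
    A deliberately simplified stand-in for HumanRF's space-time factorization;
    all sizes below are made up for illustration."""

    def __init__(self, n_components=16, grid=32, n_frames=300, feat_dim=8):
        super().__init__()
        self.spatial = nn.Parameter(torch.randn(n_components, feat_dim, grid, grid, grid) * 0.1)
        self.temporal = nn.Parameter(torch.randn(n_components, n_frames) * 0.1)

    def forward(self, xyz_idx, t_idx):
        # xyz_idx: (x, y, z) integer voxel indices; t_idx: frame index.
        # Nearest-neighbour lookup keeps the sketch short (no interpolation).
        x, y, z = xyz_idx
        spatial = self.spatial[:, :, x, y, z]            # (R, C) per-component spatial features
        temporal = self.temporal[:, t_idx]               # (R,) per-component temporal weights
        return (spatial * temporal[:, None]).sum(0)      # (C,) fused space-time feature
```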
BlendFields: Few-Shot Example-Driven Facial Modeling
Kacper Kania, Stephan J. Garbin, Andrea Tagliasacchi, Virginia Estellers, Kwang Moo Yi, Julien Valentin, Tomasz Trzciński, Marek Kowalski
arXiv preprint, 12 May 2023
[arXiv] [Project] [Video]
Neural Face Rigging for Animating and Retargeting Facial Meshes in the Wild
Dafei Qin, Jun Saito, Noam Aigerman, Thibault Groueix, Taku Komura
arXiv preprint, 15 May 2023
[arXiv]
NCHO: Unsupervised Learning for Neural 3D Composition of Humans and Objects
Taeksoo Kim, Shunsuke Saito, Hanbyul Joo
arXiv preprint, 23 May 2023
[arXiv] [Project]
FDNeRF: Semantics-Driven Face Reconstruction, Prompt Editing and Relighting with Diffusion Models
Hao Zhang, Yanbo Xu, Tianyuan Dai, Yu-Wing Tai, Chi-Keung Tang
arXiv preprint, 1 Jun 2023
[arXiv] [Github]
PixelHuman: Animatable Neural Radiance Fields from Few Images
Gyumin Shim, Jaeseong Lee, Junha Hyung, Jaegul Choo
arXiv preprint, 18 Jul 2023
[arXiv]
Efficient Region-Aware Neural Radiance Fields for High-Fidelity Talking Portrait Synthesis
Jiahe Li, Jiawei Zhang, Xiao Bai, Jun Zhou, Lin Gu
ICCV 2023, 18 Jul 2023
[arXiv] [Github]
FaceCLIPNeRF: Text-driven 3D Face Manipulation using Deformable Neural Radiance Fields
Sungwon Hwang, Junha Hyung, Daejin Kim, Min-Jung Kim, Jaegul Choo
ICCV 2023, 21 Jul 2023
[arXiv]
TransHuman: A Transformer-based Human Representation for Generalizable Neural Human Rendering
Xiao Pan, Zongxin Yang, Jianxin Ma, Chang Zhou, Yi Yang
ICCV 2023, 23 Jul 2023
[arXiv] [Project]
HAvatar: High-fidelity Head Avatar via Facial Model Conditioned Neural Radiance Field
Xiaochen Zhao, Lizhen Wang, Jingxiang Sun, Hongwen Zhang, Jinli Suo, Yebin Liu
arXiv preprint, 29 Sep 2023
[arXiv]
Point-Based Radiance Fields for Controllable Human Motion Synthesis
Haitao Yu, Deheng Zhang, Peiyuan Xie, Tianyi Zhang
arXiv preprint, 5 Oct 2023
[arXiv]
DynVideo-E: Harnessing Dynamic NeRF for Large-Scale Motion- and View-Change Human-Centric Video Editing
Jia-Wei Liu, Yan-Pei Cao, Jay Zhangjie Wu, Weijia Mao, Yuchao Gu, Rui Zhao, Jussi Keppo, Ying Shan, Mike Zheng Shou
arXiv preprint, 16 Oct 2023
[arXiv] [Project]
CosAvatar: Consistent and Animatable Portrait Video Tuning with Text Prompt
Haiyao Xiao, Chenglai Zhong, Xuan Gao, Yudong Guo, Juyong Zhang
arXiv preprint, 30 Nov 2023
[arXiv] [Project]
Artist-Friendly Relightable and Animatable Neural Heads
Yingyan Xu, Prashanth Chandran, Sebastian Weiss, Markus Gross, Gaspard Zoss, Derek Bradley
arXiv preprint, 6 Dec 2023
[arXiv]
FaceStudio: Put Your Face Everywhere in Seconds
Yuxuan Yan, Chi Zhang, Rui Wang, Yichao Zhou, Gege Zhang, Pei Cheng, Gang Yu, Bin Fu
arXiv preprint, 6 Dec 2023
[arXiv] [Project] [code]
Identity-Obscured Neural Radiance Fields: Privacy-Preserving 3D Facial Reconstruction
Jiayi Kong, Baixin Xu, Xurui Song, Chen Qian, Jun Luo, Ying He
arXiv preprint, 7 Dec 2023
[arXiv]
TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis
Heming Zhu, Fangneng Zhan, Christian Theobalt, Marc Habermann
arXiv preprint, 8 Dec 2023
[arXiv]
R2-Talker: Realistic Real-Time Talking Head Synthesis with Hash Grid Landmarks Encoding and Progressive Multilayer Conditioning
Zhiling Ye, LiangGuo Zhang, Dingheng Zeng, Quan Lu, Ning Jiang
arXiv preprint, 9 Dec 2023
[arXiv]
SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction
Zechuan Zhang, Zongxin Yang, Yi Yang
CVPR 2024, 10 Dec 2023
[arXiv] [Project] [Code]
High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering
Xin Ming, Jiawei Li, Jingwang Ling, Libo Zhang, Feng Xu
arXiv preprint, 16 Jan 2024
[arXiv]
Tri^2-plane: Volumetric Avatar Reconstruction with Feature Pyramid
Luchuan Song, Pinxin Liu, Lele Chen, Celong Liu, Chenliang Xu
arXiv preprint, 17 Jan 2024
[arXiv]
Template-Free Single-View 3D Human Digitalization with Diffusion-Guided LRM
Zhenzhen Weng, Jingyuan Liu, Hao Tan, Zhan Xu, Yang Zhou, Serena Yeung-Levy, Jimei Yang
arXiv preprint, 22 Jan 2024
[arXiv]
NeRF-AD: Neural Radiance Field with Attention-based Disentanglement for Talking Face Synthesis
Chongke Bi, Xiaoxing Liu, Zhilei Liu
ICASSP 2024, 23 Jan 2024
[arXiv]
Emo-Avatar: Efficient Monocular Video Style Avatar through Texture Rendering
Pinxin Liu, Luchuan Song, Daoan Zhang, Hang Hua, Yunlong Tang, Huaijin Tu, Jiebo Luo, Chenliang Xu
arXiv preprint, 1 Feb 2024
[arXiv]
Learning Dynamic Tetrahedra for High-Quality Talking Head Synthesis
Zicheng Zhang, Ruobing Zheng, Ziwen Liu, Congying Han, Tianqi Li, Meng Wang, Tiande Guo, Jingdong Chen, Bonan Li, Ming Yang
CVPR 2024, 27 Feb 2024
[arXiv]
DivAvatar: Diverse 3D Avatar Generation with a Single Prompt
Weijing Tao, Biwen Lei, Kunhao Liu, Shijian Lu, Miaomiao Cui, Xuansong Xie, Chunyan Miao
arXiv preprint, 27 Feb 2024
[arXiv]
PKU-DyMVHumans: A Multi-View Video Benchmark for High-Fidelity Dynamic Human Modeling
Xiaoyun Zheng, Liwei Liao, Xufeng Li, Jianbo Jiao, Rongjie Wang, Feng Gao, Shiqi Wang, Ronggang Wang
CVPR 2024, 24 Mar 2024
[arXiv] [Project] [Code] [Video]
MI-NeRF: Learning a Single Face NeRF from Multiple Identities
Aggelina Chatziagapi, Grigorios G. Chrysos, Dimitris Samaras
arXiv preprint, 29 Mar 2024
[arXiv] [Project]
Talk3D: High-Fidelity Talking Portrait Synthesis via Personalized 3D Generative Prior
Jaehoon Ko, Kyusun Cho, Joungbin Lee, Heeji Yoon, Sangmin Lee, Sangjun Ahn, Seungryong Kim
arXiv preprint, 29 Mar 2024
[arXiv] [Project] [Code] [Video]
StructLDM: Structured Latent Diffusion for 3D Human Generation
Tao Hu, Fangzhou Hong, Ziwei Liu
arXiv preprint, 1 Apr 2024
[arXiv] [Project] [Code] [Video]
MagicMirror: Fast and High-Quality Avatar Generation with a Constrained Search Space
Armand Comas-Massagué, Di Qiu, Menglei Chai, Marcel Bühler, Amit Raj, Ruiqi Gao, Qiangeng Xu, Mark Matthews, Paulo Gotardo, Octavia Camps, Sergio Orts-Escolano, Thabo Beeler
arXiv preprint, 1 Apr 2024
[arXiv]
GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields
Arnab Dey, Di Yang, Rohith Agaram, Antitza Dantcheva, Andrew I. Comport, Srinath Sridhar, Jean Martinet
arXiv preprint, 9 Apr 2024
[arXiv]
HFNeRF: Learning Human Biomechanic Features with Neural Radiance Fields
Arnab Dey, Di Yang, Antitza Dantcheva, Jean Martinet
arXiv preprint, 9 Apr 2024
[arXiv]
ArtNeRF: A Stylized Neural Field for 3D-Aware Cartoonized Face Synthesis
Zichen Tang, Hongyu Yang
arXiv preprint, 21 Apr 2024
[arXiv]
Embedded Representation Learning Network for Animating Styled Video Portrait
Tianyong Wang, Xiangyu Liang, Wangguandong Zheng, Dan Niu, Haifeng Xia, Siyu Xia
arXiv preprint, 29 Apr 2024
[arXiv]
NeRFFaceSpeech: One-shot Audio-driven 3D Talking Head Synthesis via Generative Prior
Gihoon Kim, Kwanggyoon Seo, Sihun Cha, Junyong Noh
arXiv preprint, 9 May 2024
[arXiv]
HINT: Learning Complete Human Neural Representations from Limited Viewpoints
Alessandro Sanvito, Andrea Ramazzina, Stefanie Walz, Mario Bijelic, Felix Heide
arXiv preprint, 30 May 2024
[arXiv]
Representing Animatable Avatar via Factorized Neural Fields
Chunjin Song, Zhijie Wu, Bastian Wandt, Leonid Sigal, Helge Rhodin
arXiv preprint, 2 Jun 2024
[arXiv]
NLDF: Neural Light Dynamic Fields for Efficient 3D Talking Head Generation
Niu Guanchen
arXiv preprint, 17 Jun 2024
[arXiv]
Semantic Communications for 3D Human Face Transmission with Neural Radiance Fields
Guanlin Wu, Zhonghao Lyu, Juyong Zhang, Jie Xu
arXiv preprint, 19 Jul 2024
[arXiv]
Motion-Oriented Compositional Neural Radiance Fields for Monocular Dynamic Human Modeling
Jaehyeok Kim, Dongyoon Wee, Dan Xu
ECCV 2024, 16 Jul 2024
[arXiv] [Project]
🔥S^3D-NeRF: Single-Shot Speech-Driven Neural Radiance Field for High Fidelity Talking Head Synthesis
Dongze Li, Kang Zhao, Wei Wang, Yifeng Ma, Bo Peng, Yingya Zhang, Jing Dong
ECCV 2024, 18 Aug 2024
Abstract
Talking head synthesis is a practical technique with wide applications. Current Neural Radiance Field (NeRF) based approaches have shown their superiority on driving one-shot talking heads with videos or signals regressed from audio. However, most of them failed to take the audio as driven information directly, unable to enjoy the flexibility and availability of speech. Since mapping audio signals to face deformation is non-trivial, we design a Single-Shot Speech-Driven Neural Radiance Field (S^3D-NeRF) method in this paper to tackle the following three difficulties: learning a representative appearance feature for each identity, modeling motion of different face regions with audio, and keeping the temporal consistency of the lip area. To this end, we introduce a Hierarchical Facial Appearance Encoder to learn multi-scale representations for catching the appearance of different speakers, and elaborate a Cross-modal Facial Deformation Field to perform speech animation according to the relationship between the audio signal and different face regions. Moreover, to enhance the temporal consistency of the important lip area, we introduce a lip-sync discriminator to penalize the out-of-sync audio-visual sequences. Extensive experiments have shown that our S^3D-NeRF surpasses previous arts on both video fidelity and audio-lip synchronization.
[arXiv]
TalkinNeRF: Animatable Neural Fields for Full-Body Talking Humans
Aggelina Chatziagapi, Bindita Chaudhuri, Amit Kumar, Rakesh Ranjan, Dimitris Samaras, Nikolaos Sarafianos
ECCVW 2024, 25 Sep 2024
[arXiv] [Project]
LightAvatar: Efficient Head Avatar as Dynamic Neural Light Field
Huan Wang, Feitong Tan, Ziqian Bai, Yinda Zhang, Shichen Liu, Qiangeng Xu, Menglei Chai, Anish Prabhu, Rohit Pandey, Sean Fanello, Zeng Huang, Yun Fu
ECCV 2024 CADL Workshop, 26 Sep 2024
[arXiv] [Code]
EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View
Zhaorong Wang, Yoshihiro Kanamori, Yuki Endo
arXiv preprint, 16 Oct 2024
[arXiv] [Code]
🔥Real-time 3D-aware Portrait Video Relighting
Ziqi Cai, Kaiwen Jiang, Shu-Yu Chen, Yu-Kun Lai, Hongbo Fu, Boxin Shi, Lin Gao
CVPR 2024, 24 Oct 2024
Abstract
Synthesizing realistic videos of talking faces under custom lighting conditions and viewing angles benefits various downstream applications like video conferencing. However, most existing relighting methods are either time-consuming or unable to adjust the viewpoints. In this paper, we present the first real-time 3D-aware method for relighting in-the-wild videos of talking faces based on Neural Radiance Fields (NeRF). Given an input portrait video, our method can synthesize talking faces under both novel views and novel lighting conditions with a photo-realistic and disentangled 3D representation. Specifically, we infer an albedo tri-plane, as well as a shading tri-plane based on a desired lighting condition for each video frame with fast dual-encoders. We also leverage a temporal consistency network to ensure smooth transitions and reduce flickering artifacts. Our method runs at 32.98 fps on consumer-level hardware and achieves state-of-the-art results in terms of reconstruction quality, lighting error, lighting instability, temporal consistency and inference speed. We demonstrate the effectiveness and interactivity of our method on various portrait videos with diverse lighting and viewing conditions.
Efficient Neural Implicit Representation for 3D Human Reconstruction
Zexu Huang, Sarah Monazam Erfani, Siying Lu, Mingming Gong
arXiv preprint, 23 Oct 2024
[arXiv]
Joker: Conditional 3D Head Synthesis with Extreme Facial Expressions
Malte Prinzler, Egor Zakharov, Vanessa Sklyarova, Berna Kabadayi, Justus Thies
arXiv preprint, 21 Oct 2024
[arXiv] [Project]
🔥NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul Srinivasan, Jonathan T. Barron
CVPR 2022, 26 Nov 2021
Abstract
Neural Radiance Fields (NeRF) is a technique for high quality novel view synthesis from a collection of posed input images. Like most view synthesis methods, NeRF uses tonemapped low dynamic range (LDR) as input; these images have been processed by a lossy camera pipeline that smooths detail, clips highlights, and distorts the simple noise distribution of raw sensor data. We modify NeRF to instead train directly on linear raw images, preserving the scene's full dynamic range. By rendering raw output images from the resulting NeRF, we can perform novel high dynamic range (HDR) view synthesis tasks. In addition to changing the camera viewpoint, we can manipulate focus, exposure, and tonemapping after the fact. Although a single raw image appears significantly more noisy than a postprocessed one, we show that NeRF is highly robust to the zero-mean distribution of raw noise. When optimized over many noisy raw inputs (25-200), NeRF produces a scene representation so accurate that its rendered novel views outperform dedicated single and multi-image deep raw denoisers run on the same wide baseline input images. As a result, our method, which we call RawNeRF, can reconstruct scenes from extremely noisy images captured in near-darkness.
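Because the scene is reconstructed in linear HDR, exposure and tonemapping become post-processing steps applied to the rendered output. The snippet below illustrates that idea with a generic exposure gain and an sRGB transfer curve; it is not the paper's processing pipeline.

```python
import numpy as np

def tonemap_linear_render(hdr_rgb, exposure_stops=0.0, eps=1e-8):
    """Apply exposure and a simple sRGB-style tonemap to a linear HDR rendering,
    illustrating after-the-fact exposure/tonemapping on a RawNeRF-like output.
    The gamma curve below is a generic choice, not the paper's."""
    scaled = hdr_rgb * (2.0 ** exposure_stops)                    # exposure in photographic stops
    scaled = np.clip(scaled, 0.0, 1.0)
    # Piecewise sRGB transfer function (linear -> display-referred).
    low = scaled * 12.92
    high = 1.055 * np.power(np.maximum(scaled, eps), 1.0 / 2.4) - 0.055
    return np.where(scaled <= 0.0031308, low, high)
```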
Deblur-NeRF: Neural Radiance Fields from Blurry Images
Li Ma, Xiaoyu Li, Jing Liao, Qi Zhang, Xuan Wang, Jue Wang, Pedro V. Sander
CVPR 2022, 29 Nov 2021
[arXiv] [Project] [Github]
HDR-NeRF: High Dynamic Range Neural Radiance Fields
Xin Huang, Qi Zhang, Ying Feng, Hongdong Li, Xuan Wang, Qing Wang
CVPR 2022, 29 Nov 2021
[arXiv] [Project] [Github]
NeRF-SR: High-Quality Neural Radiance Fields using Supersampling
Chen Wang, Xian Wu, Yuan-Chen Guo, Song-Hai Zhang, Yu-Wing Tai, Shi-Min Hu
MM 2022, 3 Dec 2021
[arXiv] [Project] [Github]
Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields
Ziteng Cui, Lin Gu, Xiao Sun, Yu Qiao, Tatsuya Harada
arXiv preprint, 10 Mar 2023
[arXiv] [Project] [Github]
Lighting up NeRF via Unsupervised Decomposition and Enhancement
Haoyuan Wang, Xiaogang Xu, Ke Xu, Rynson W.H. Lau
ICCV 2023, 20 Jul 2023
[arXiv] [Project] [Github]
Sharp-NeRF: Grid-based Fast Deblurring Neural Radiance Fields Using Sharpness Prior
Byeonghyeon Lee, Howoong Lee, Usman Ali, Eunbyung Park
WACV 2024, 1 Jan 2024
[arXiv] [Project] [Code]
RustNeRF: Robust Neural Radiance Field with Low-Quality Images
Mengfei Li, Ming Lu, Xiaofang Li, Shanghang Zhang
arXiv preprint, 6 Jan 2024
[arXiv]
Colorizing Monochromatic Radiance Fields
Yean Cheng, Renjie Wan, Shuchen Weng, Chengxuan Zhu, Yakun Chang, Boxin Shi
AAAI 2024, 19 Feb 2024
[arXiv] [Project] [Code]
SMURF: Continuous Dynamics for Motion-Deblurring Radiance Fields
Jungho Lee, Dogyoon Lee, Minhyeok Lee, Donghyung Kim, Sangyoun Lee
arXiv preprint, 12 Mar 2024
[arXiv] [Code]
DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video
Huiqiang Sun, Xingyi Li, Liao Shen, Xinyi Ye, Ke Xian, Zhiguo Cao
CVPR 2024, 15 Mar 2024
[arXiv] [Project] [Code]
SCINeRF: Neural Radiance Fields from a Snapshot Compressive Image
Yunhao Li, Xiaodong Wang, Ping Wang, Xin Yuan, Peidong Liu
arXiv preprint, 29 Mar 2024
[arXiv] [Code]
DerainNeRF: 3D Scene Estimation with Adhesive Waterdrop Removal
Yunhao Li, Jing Wu, Lingzhe Zhao, Peidong Liu
arXiv preprint, 29 Mar 2024
[arXiv]
IReNe: Instant Recoloring in Neural Radiance Fields
Alessio Mazzucchelli, Adrian Garcia-Garcia, Elena Garces, Fernando Rivas-Manzaneque, Francesc Moreno-Noguer, Adrian Penate-Sanchez
arXiv preprint, 30 May 2024
[arXiv]
Bilateral Guided Radiance Field Processing
Yuehao Wang, Chaoyi Wang, Bingchen Gong, Tianfan Xue
SIGGRAPH 2024, 1 Jun 2024
[arXiv] [Project] [Code] [Video]
Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment
Yunshan Qi, Lin Zhu, Yifan Zhao, Nan Bao, Jia Li
arXiv preprint, 20 Jun 2024
[arXiv]
Style-NeRF2NeRF: 3D Style Transfer From Style-Aligned Multi-View Images
Haruo Fujiwara, Yusuke Mukuta, Tatsuya Harada
arXiv preprint, 19 Jun 2024
[arXiv] [Project]
Sparse-DeRF: Deblurred Neural Radiance Fields from Sparse View
Dogyoon Lee, Donghyeong Kim, Jungho Lee, Minhyeok Lee, Seunghoon Lee, Sangyoun Lee
arXiv preprint, 9 Jul 2024
[arXiv] [Project]
PanDORA: Casual HDR Radiance Acquisition for Indoor Scenes
Mohammad Reza Karimi Dastjerdi, Frédéric Fortier-Chouinard, Yannick Hold-Geoffroy, Marc Hébert, Claude Demers, Nima Kalantari, Jean-François Lalonde
arXiv preprint, 8 Jul 2024
[arXiv]
LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo
Wei Zhi Tang, Daniel Rebain, Kostantinos G. Derpanis, Kwang Moo Yi
arXiv preprint, 9 Sep 2024
[arXiv] [Code]
Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions
Weng Fei Low, Gim Hee Lee
ECCV 2024, 26 Sep 2024
[arXiv] [Project] [Code]
LuSh-NeRF: Lighting up and Sharpening NeRFs for Low-light Scenes
Zefan Qu, Ke Xu, Gerhard Petrus Hancke, Rynson W.H. Lau
NeurIPS 2024, 11 Nov 2024
[arXiv] [Code]
ASSR-NeRF: Arbitrary-Scale Super-Resolution on Voxel Grid for High-Quality Radiance Fields Reconstruction
Ding-Jiun Huang, Zi-Ting Chou, Yu-Chiang Frank Wang, Cheng Sun
arXiv preprint, 28 Jun 2024
[arXiv]
3D Reconstruction and New View Synthesis of Indoor Environments based on a Dual Neural Radiance Field
Zhenyu Bao, Guibiao Liao, Zhongyuan Zhao, Kanglin Liu, Qing Li, Guoping Qiu
arXiv preprint, 26 Jan 2024
[arXiv]
NeRF-Det++: Incorporating Semantic Cues and Perspective-aware Depth Supervision for Indoor Multi-View 3D Detection
Chenxi Huang, Yuenan Hou, Weicai Ye, Di Huang, Xiaoshui Huang, Binbin Lin, Deng Cai, Wanli Ouyang
arXiv preprint, 22 Feb 2024
[arXiv] [Code]
VF-NeRF: Learning Neural Vector Fields for Indoor Scene Reconstruction
Albert Gassol Puigjaner, Edoardo Mello Rella, Erik Sandström, Ajad Chhatkuli, Luc Van Gool
arXiv preprint, 16 Aug 2024
[arXiv]
🔥Urban Radiance Fields
Konstantinos Rematas, Andrew Liu, Pratul P. Srinivasan, Jonathan T. Barron, Andrea Tagliasacchi, Thomas Funkhouser, Vittorio Ferrari
CVPR 2022, 29 Nov 2021
Abstract
The goal of this work is to perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments (e.g., Street View). Given a sequence of posed RGB images and lidar sweeps acquired by cameras and scanners moving through an outdoor scene, we produce a model from which 3D surfaces can be extracted and novel RGB images can be synthesized. Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with new methods for leveraging asynchronously captured lidar data, for addressing exposure variation between captured images, and for leveraging predicted image segmentations to supervise densities on rays pointing at the sky. Each of these three extensions provides significant performance improvements in experiments on Street View data. Our system produces state-of-the-art 3D surface reconstructions and synthesizes higher quality novel views in comparison to both traditional methods (e.g., COLMAP) and recent neural representations (e.g., Mip-NeRF).
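One of the three extensions above uses predicted sky segmentations to supervise densities along rays that hit the sky. A minimal version of such a supervision term is sketched below; the squared accumulated-opacity penalty is an illustrative choice and may differ from the loss actually used in the paper.

```python
import torch

def sky_ray_density_loss(sigmas, deltas, sky_mask):
    """Penalize accumulated density along rays labeled as sky by a 2D segmentation.

    sigmas: (R, N) densities per ray sample, deltas: (R, N) sample spacings,
    sky_mask: (R,) boolean mask of rays hitting sky pixels.
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas[:, :-1] + 1e-10], dim=1),
        dim=1)
    acc_opacity = (alphas * trans).sum(dim=1)       # (R,) total accumulated opacity per ray
    return (acc_opacity[sky_mask] ** 2).mean()      # sky rays should stay empty
```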
Xingyu Chen, Qi Zhang, Xiaoyu Li, Yue Chen, Ying Feng, Xuan Wang, Jue Wang
CVPR 2022, 30 Nov 2021
[arXiv] [Project] [Github]
NeRF for Outdoor Scene Relighting
Viktor Rudnev, Mohamed Elgharib, William Smith, Lingjie Liu, Vladislav Golyanik, Christian Theobalt
ECCV 2022, 9 Dec 2021
[arXiv] [Project] [Github]
BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering
Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao, Christian Theobalt, Bo Dai, Dahua Lin
ECCV 2022, 10 Dec 2021
[arXiv] [Project] [Github]
🔥Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs
Haithem Turki, Deva Ramanan, Mahadev Satyanarayanan
CVPR 2022, 20 Dec 2021
Abstract
We use neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drones. In contrast to single object scenes (on which NeRFs are traditionally evaluated), our scale poses multiple challenges including (1) the need to model thousands of images with varying lighting conditions, each of which capture only a small subset of the scene, (2) prohibitively large model capacities that make it infeasible to train on a single GPU, and (3) significant challenges for fast rendering that would enable interactive fly-throughs. To address these challenges, we begin by analyzing visibility statistics for large-scale scenes, motivating a sparse network structure where parameters are specialized to different regions of the scene. We introduce a simple geometric clustering algorithm for data parallelism that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel. We evaluate our approach on existing datasets (Quad 6k and UrbanScene3D) as well as against our own drone footage, improving training speed by 3x and PSNR by 12%. We also evaluate recent NeRF fast renderers on top of Mega-NeRF and introduce a novel method that exploits temporal coherence. Our technique achieves a 40x speedup over conventional NeRF rendering while remaining within 0.8 db in PSNR quality, exceeding the fidelity of existing fast renderers.
🔥Block-NeRF: Scalable Large Scene Neural View Synthesis
Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, Henrik Kretzschmar
CVPR 2022, 10 Feb 2022
Abstract
We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment. We adopt several architectural changes to make NeRF robust to data captured over months under different environmental conditions. We add appearance embeddings, learned pose refinement, and controllable exposure to each individual NeRF, and introduce a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.
DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis
Yinghao Xu, Menglei Chai, Zifan Shi, Sida Peng, Ivan Skorokhodov, Aliaksandr Siarohin, Ceyuan Yang, Yujun Shen, Hsin-Ying Lee, Bolei Zhou, Sergey Tulyakov
arXiv preprint, 22 Dec 2022
[arXiv] [Project] [Github] [Video]
S-NeRF: Neural Radiance Fields for Street Views
Ziyang Xie, Junge Zhang, Wenye Li, Feihu Zhang, Li Zhang
ICLR 2023, 1 Mar 2023
[arXiv] [Project] [Video]
Progressively Optimized Local Radiance Fields for Robust View Synthesis
Andreas Meuleman, Yu-Lun Liu, Chen Gao, Jia-Bin Huang, Changil Kim, Min H. Kim, Johannes Kopf
CVPR 2023, 24 Mar 2023
[arXiv] [Project] [Video]
SUDS: Scalable Urban Dynamic Scenes
Haithem Turki, Jason Y. Zhang, Francesco Ferroni, Deva Ramanan
CVPR 2023, 25 Mar 2023
[arXiv] [Project] [Github] [Video]
Neural Fields meet Explicit Geometric Representations for Inverse Rendering of Urban Scenes
Zian Wang, Tianchang Shen, Jun Gao, Shengyu Huang, Jacob Munkberg, Jon Hasselgren, Zan Gojcic, Wenzheng Chen, Sanja Fidler
CVPR 2023, 6 Apr 2023
[arXiv] [Project] [Video]
NeuralField-LDM: Scene Generation with Hierarchical Latent Diffusion Models
Seung Wook Kim, Bradley Brown, Kangxue Yin, Karsten Kreis, Katja Schwarz, Daiqing Li, Robin Rombach, Antonio Torralba, Sanja Fidler
CVPR 2023, 19 Apr 2023
[arXiv] [Project] [Video]
PlaNeRF: SVD Unsupervised 3D Plane Regularization for NeRF Large-Scale Scene Reconstruction
Fusang Wang, Arnaud Louys, Nathan Piasco, Moussab Bennehar, Luis Roldão, Dzmitry Tsishkou
arXiv preprint, 26 May 2023
[arXiv]
Urban Radiance Field Representation with Deformable Neural Mesh Primitives
Fan Lu, Yan Xu, Guang Chen, Hongsheng Li, Kwan-Yee Lin, Changjun Jiang
ICCV 2023, 20 Jul 2023
[arXiv] [Project] [Github]
Federated Learning for Large-Scale Scene Modeling with Neural Radiance Fields
Teppei Suzuki
arXiv preprint, 12 Sep 2023
[arXiv]
UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene
Jiaming Gu, Minchao Jiang, Hongsheng Li, Xiaoyuan Lu, Guangming Zhu, Syed Afaq Ali Shah, Liang Zhang, Mohammed Bennamoun
NeurIPS 2023, 20 Oct 2023
[arXiv] [Project] [Github]
PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes
Xiao Fu, Shangzhan Zhang, Tianrun Chen, Yichong Lu, Xiaowei Zhou, Andreas Geiger, Yiyi Liao
arXiv preprint, 19 Sep 2023
[arXiv] [Project] [Github]
MMPI: a Flexible Radiance Field Representation by Multiple Multi-plane Images Blending
Yuze He, Peng Wang, Yubin Hu, Wang Zhao, Ran Yi, Yong-Jin Liu, Wenping Wang
arXiv preprint, 30 Sep 2023
[arXiv]
SCALAR-NeRF: SCAlable LARge-scale Neural Radiance Fields for Scene Reconstruction
Yu Chen, Gim Hee Lee
arXiv preprint, 28 Nov 2023
[arXiv] [Project]
EvE: Exploiting Generative Priors for Radiance Field Enrichment
Karim Kassab, Antoine Schnepf, Jean-Yves Franceschi, Laurent Caraffa, Jeremie Mary, Valérie Gouet-Brunet
arXiv preprint, 1 Dec 2023
[arXiv] [Project]
LightSim: Neural Lighting Simulation for Urban Scenes
Ava Pun, Gary Sun, Jingkang Wang, Yun Chen, Ze Yang, Sivabalan Manivasagam, Wei-Chiu Ma, Raquel Urtasun
NeurIPS 2023, 11 Dec 2023
[arXiv] [Project]
SMERF: Streamable Memory Efficient Radiance Fields for Real-Time Large-Scene Exploration
Daniel Duckworth, Peter Hedman, Christian Reiser, Peter Zhizhin, Jean-François Thibert, Mario Lučić, Richard Szeliski, Jonathan T. Barron
arXiv preprint, 12 Dec 2023
[arXiv] [Project] [Video]
City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
Kaiwen Song, Juyong Zhang
arXiv preprint, 27 Dec 2023
[arXiv] [Project]
Global-guided Focal Neural Radiance Field for Large-scale Scene Rendering
Mingqi Shao, Feng Xiong, Hang Zhang, Shuang Yang, Mu Xu, Wei Bian, Xueqian Wang
arXiv preprint, 19 Mar 2024
[arXiv] [Project]
Entity-NeRF: Detecting and Removing Moving Entities in Urban Scenes
Takashi Otonari, Satoshi Ikehata, Kiyoharu Aizawa
CVPR 2024, 24 Mar 2024
[arXiv] [Project]
AG-NeRF: Attention-guided Neural Radiance Fields for Multi-height Large-scale Outdoor Scene Rendering
Jingfeng Guo, Xiaohan Zhang, Baozhu Zhao, Qi Liu
arXiv preprint, 18 Apr 2024
[arXiv]
NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild
Weining Ren, Zihan Zhu, Boyang Sun, Jiaqi Chen, Marc Pollefeys, Songyou Peng
CVPR 2024, 29 May 2024
[arXiv] [Project] [Code]
Crowd-Sourced NeRF: Collecting Data from Production Vehicles for 3D Street View Reconstruction
Tong Qin, Changze Li, Haoyang Ye, Shaowei Wan, Minzhen Li, Hongwei Liu, Ming Yang
arXiv preprint, 24 Jun 2024
[arXiv]
Neural Radiance Field in Autonomous Driving: A Survey
Lei He, Leheng Li, Wenchao Sun, Zeyu Han, Yichen Liu, Sifa Zheng, Jianqiang Wang, Keqiang Li
arXiv preprint, 22 Apr 2024
[arXiv]
READ: Large-Scale Neural Scene Rendering for Autonomous Driving
Zhuopeng Li, Lu Li, Zeyu Ma, Ping Zhang, Junbo Chen, Jianke Zhu
AAAI 2023, 11 May 2022
[arXiv] [Github] [Video]
🔥MARS: An Instance-aware, Modular and Realistic Simulator for Autonomous Driving
Zirui Wu, Tianyu Liu, Liyi Luo, Zhide Zhong, Jianteng Chen, Hongmin Xiao, Chao Hou, Haozhe Lou, Yuantao Chen, Runyi Yang, Yuxin Huang, Xiaoyu Ye, Zike Yan, Yongliang Shi, Yiyi Liao, Hao Zhao
CICAI 2023, 27 Jul 2023
Abstract
Nowadays, autonomous cars can drive smoothly in ordinary cases, and it is widely recognized that realistic sensor simulation will play a critical role in solving remaining corner cases by simulating them. To this end, we propose an autonomous driving simulator based upon neural radiance fields (NeRFs). Compared with existing works, ours has three notable features: (1) Instance-aware. Our simulator models the foreground instances and background environments separately with independent networks so that the static (e.g., size and appearance) and dynamic (e.g., trajectory) properties of instances can be controlled separately. (2) Modular. Our simulator allows flexible switching between different modern NeRF-related backbones, sampling strategies, input modalities, etc. We expect this modular design to boost academic progress and industrial deployment of NeRF-based autonomous driving simulation. (3) Realistic. Our simulator set new state-of-the-art photo-realism results given the best module selection. Our simulator will be open-sourced while most of our counterparts are not. Project page: this https URL.
[arXiv] [Project] [Code] [Video]
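A minimal sketch of the instance-aware compositing idea, assuming foreground-instance and background fields are queried separately: samples from both networks are merged in depth order along each ray and alpha-composited with the usual NeRF quadrature weights. The random field outputs below are stand-ins, not the MARS implementation.

```python
import numpy as np

def composite_fg_bg(t_fg, sigma_fg, rgb_fg, t_bg, sigma_bg, rgb_bg):
    """Merge foreground and background samples by depth, then alpha-composite
    them with the standard NeRF quadrature weights."""
    order = np.argsort(np.concatenate([t_fg, t_bg]))
    t = np.concatenate([t_fg, t_bg])[order]
    sigma = np.concatenate([sigma_fg, sigma_bg])[order]
    rgb = np.concatenate([rgb_fg, rgb_bg])[order]

    delta = np.diff(t, append=t[-1] + 1e10)          # interval lengths
    alpha = 1.0 - np.exp(-sigma * delta)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                          # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)

# Toy usage for a single ray with random stand-in network outputs.
rng = np.random.default_rng(0)
t_fg, t_bg = np.linspace(2, 4, 32), np.linspace(0.5, 8, 64)
print(composite_fg_bg(t_fg, rng.random(32), rng.random((32, 3)),
                      t_bg, rng.random(64), rng.random((64, 3))))
```

Because each instance keeps its own network, its sample positions can be moved along a new trajectory before this merge, which is what makes such a simulator controllable.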
PC-NeRF: Parent-Child Neural Radiance Fields under Partial Sensor Data Loss in Autonomous Driving Environments
Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi Ma
arXiv preprint, 2 Oct 2023
[arXiv] [Github]
UC-NeRF: Neural Radiance Field for Under-Calibrated multi-view cameras in autonomous driving
Kai Cheng, Xiaoxiao Long, Wei Yin, Jin Wang, Zhiqiang Wu, Yuexin Ma, Kaixuan Wang, Xiaozhi Chen, Xuejin Chen
arXiv preprint, 28 Nov 2023
[arXiv] [Project]
DGNR: Density-Guided Neural Point Rendering of Large Driving Scenes
Zhuopeng Li, Chenming Wu, Liangjun Zhang, Jianke Zhu
arXiv preprint, 28 Nov 2023
[arXiv]
Dynamic LiDAR Re-simulation using Compositional Neural Fields
Hanfeng Wu, Xingxing Zuo, Stefan Leutenegger, Or Litany, Konrad Schindler, Shengyu Huang
CVPR 2024, 8 Dec 2023
[arXiv] [Project] [Code] [Video]
Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities
Xu Yan, Haiming Zhang, Yingjie Cai, Jingming Guo, Weichao Qiu, Bin Gao, Kaiqiang Zhou, Yue Zhao, Huan Jin, Jiantao Gao, Zhen Li, Lihui Jiang, Wei Zhang, Hongbo Zhang, Dengxin Dai, Bingbing Liu
arXiv preprint, 16 Jan 2024
[arXiv] [Code]
CARFF: Conditional Auto-encoded Radiance Field for 3D Scene Forecasting
Jiezhi Yang, Khushi Desai, Charles Packer, Harshil Bhatia, Nicholas Rhinehart, Rowan McAllister, Joseph Gonzalez
arXiv preprint, 31 Jan 2024
[arXiv]
PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments
Xiuzhong Hu, Guangming Xiong, Zheng Zang, Peng Jia, Yuxuan Han, Junyi Ma
arXiv preprint, 14 Feb 2024
[arXiv] [Code]
OccFlowNet: Towards Self-supervised Occupancy Estimation via Differentiable Rendering and Occupancy Flow
Simon Boeder, Fabian Gigengack, Benjamin Risse
arXiv preprint, 20 Feb 2024
[arXiv]
Lightning NeRF: Efficient Hybrid Scene Representation for Autonomous Driving
Junyi Cao, Zhichao Li, Naiyan Wang, Chao Ma
ICRA 2024, 9 Mar 2024
[arXiv] [Code]
Are NeRFs ready for autonomous driving? Towards closing the real-to-simulation gap
Carl Lindström, Georg Hess, Adam Lilja, Maryam Fatemi, Lars Hammarstrand, Christoffer Petersson, Lennart Svensson
CVPR 2024, 24 Mar 2024
[arXiv] [Project]
DriveEnv-NeRF: Exploration of A NeRF-Based Autonomous Driving Environment for Real-World Performance Validation
Mu-Yi Shen, Chia-Chi Hsu, Hao-Yu Hou, Yu-Chen Huang, Wei-Fang Sun, Chia-Che Chang, Yu-Lun Liu, Chun-Yi Lee
arXiv preprint, 23 Mar 2024
[arXiv] [Project] [Code] [Video]
NeuroNCAP: Photorealistic Closed-loop Safety Testing for Autonomous Driving
William Ljungbergh, Adam Tonderski, Joakim Johnander, Holger Caesar, Kalle Åström, Michael Felsberg, Christoffer Petersson
arXiv preprint, 11 Apr 2024
[arXiv] [Code]
Searching Realistic-Looking Adversarial Objects For Autonomous Driving Systems
Shengxiang Sun, Shenzhe Zhu
arXiv preprint, 19 May 2024
[arXiv]
HybridOcc: NeRF Enhanced Transformer-based Multi-Camera 3D Occupancy Prediction
Xiao Zhao, Bo Chen, Mingyang Sun, Dingkang Yang, Youxing Wang, Xukun Zhang, Mingcheng Li, Dongliang Kou, Xiaoyi Wei, Lihua Zhang
IEEE RAL, 17 Aug 2024
[arXiv]
LeC^2O-NeRF: Learning Continuous and Compact Large-Scale Occupancy for Urban Scenes
Zhenxing Mi, Dan Xu
18 Nov 2024
[arXiv]
Recolorable Posterization of Volumetric Radiance Fields Using Visibility-Weighted Palette Extraction
Kenji Tojo, Nobuyuki Umetani
EGSR 2022
[Paper] [Project] [Github]
ClimateNeRF: Physically-based Neural Rendering for Extreme Climate Synthesis
Yuan Li, Zhi-Hao Lin, David Forsyth, Jia-Bin Huang, Shenlong Wang
CVPR 2023, 19 Apr 2022
[arXiv] [Project] [Video]
NeuMesh: Learning Disentangled Neural Mesh-based Implicit Field for Geometry and Texture Editing
Bangbang Yang, Chong Bao, Junyi Zeng, Hujun Bao, Yinda Zhang, Zhaopeng Cui, Guofeng Zhang
CVPR 2022, 25 Jul 2022
[arXiv] [Project] [Github]
🔥SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields
Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G. Derpanis, Jonathan Kelly, Marcus A. Brubaker, Igor Gilitschenski, Alex Levinshtein
CVPR 2023, 22 Nov 2022
Abstract
Neural Radiance Fields (NeRFs) have emerged as a popular approach for novel view synthesis. While NeRFs are quickly being adapted for a wider set of applications, intuitively editing NeRF scenes is still an open challenge. One important editing task is the removal of unwanted objects from a 3D scene, such that the replaced region is visually plausible and consistent with its context. We refer to this task as 3D inpainting. In 3D, solutions must be both consistent across multiple views and geometrically valid. In this paper, we propose a novel 3D inpainting method that addresses these challenges. Given a small set of posed images and sparse annotations in a single input image, our framework first rapidly obtains a 3D segmentation mask for a target object. Using the mask, a perceptual optimization-based approach is then introduced that leverages learned 2D image inpainters, distilling their information into 3D space, while ensuring view consistency. We also address the lack of a diverse benchmark for evaluating 3D scene inpainting methods by introducing a dataset comprised of challenging real-world scenes. In particular, our dataset contains views of the same scene with and without a target object, enabling more principled benchmarking of the 3D inpainting task. We first demonstrate the superiority of our approach on multiview segmentation, comparing to NeRF-based methods and 2D segmentation approaches. We then evaluate on the task of 3D inpainting, establishing state-of-the-art performance against other NeRF manipulation algorithms, as well as a strong 2D image inpainter baseline. Project Page: this https URL
[arXiv] [Project] [Github] [Video] [Notes]
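A rough illustration of the supervision split described above, as a sketch under simplifying assumptions rather than the authors' code: pixels outside the object mask are fitted to the captured images, while pixels inside the mask are pulled toward 2D-inpainted images, with a multi-scale patch distance standing in for the perceptual (LPIPS-style) term.

```python
import torch
import torch.nn.functional as F

def inpainting_loss(rendered, captured, inpainted_2d, mask, w_perc=0.1):
    """rendered/captured/inpainted_2d: (B, 3, H, W); mask: (B, 1, H, W), 1 inside
    the removed-object region.  Outside the mask the render must match the real
    capture; inside it is pulled toward the 2D inpainting with a crude
    patch-feature distance standing in for a perceptual loss."""
    recon = ((1 - mask) * (rendered - captured) ** 2).mean()

    perc = 0.0
    for s in (1, 2, 4):                         # compare average-pooled patches
        r = F.avg_pool2d(mask * rendered, s)
        i = F.avg_pool2d(mask * inpainted_2d, s)
        perc = perc + (r - i).abs().mean()
    return recon + w_perc * perc

# Toy usage with random tensors.
B, H, W = 2, 64, 64
loss = inpainting_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                       torch.rand(B, 3, H, W), (torch.rand(B, 1, H, W) > 0.7).float())
print(loss.item())
```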
NeRF-Art: Text-Driven Neural Radiance Fields Stylization
Can Wang, Ruixiang Jiang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao
arXiv preprint, 15 Dec 2022
[arXiv] [Project] [Github] [Video]
PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields
Zhengfei Kuang, Fujun Luan, Sai Bi, Zhixin Shu, Gordon Wetzstein, Kalyan Sunkavalli
arXiv preprint, 21 Dec 2022
[arXiv] [Project] [Github] [Video]
RecolorNeRF: Layer Decomposed Radiance Fields for Efficient Color Editing of 3D Scenes
Bingchen Gong, Yuehao Wang, Xiaoguang Han, Qi Dou
arXiv preprint, 19 Jan 2023
[arXiv] [Project] [Github] [Video]
🔥Interactive Geometry Editing of Neural Radiance Fields
Shaoxu Li, Ye Pan
I3D 2023, 21 Mar 2023
Abstract
In this paper, we propose a method that enables interactive geometry editing for neural radiance fields manipulation. We use two proxy cages(inner cage and outer cage) to edit a scene. The inner cage defines the operation target, and the outer cage defines the adjustment space. Various operations apply to the two cages. After cage selection, operations on the inner cage lead to the desired transformation of the inner cage and adjustment of the outer cage. Users can edit the scene with translation, rotation, scaling, or combinations. The operations on the corners and edges of the cage are also supported. Our method does not need any explicit 3D geometry representations. The interactive geometry editing applies directly to the implicit neural radiance fields. Extensive experimental results demonstrate the effectiveness of our approach.[arXiv] [Project] [Github] [Video] [Notes]
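A minimal sketch of the cage-editing principle, with spheres standing in for the inner/outer cages and a rigid transform as the edit: query points are inverse-warped before evaluating the frozen radiance field, fully inside the inner region, not at all outside the outer region, and blended in between. The real method uses cage coordinates rather than this radial blend.

```python
import numpy as np

def edit_query_points(pts, center, r_inner, r_outer, R, t):
    """Inverse-warp query points so a frozen NeRF renders an edited scene.
    The edit moves content near `center` by the rigid transform (R, t); sample
    points are mapped back to the original space before the field is queried.
    Spheres of radius r_inner / r_outer stand in for the inner / outer cages."""
    d = np.linalg.norm(pts - center, axis=-1, keepdims=True)
    w = np.clip((r_outer - d) / (r_outer - r_inner), 0.0, 1.0)  # 1 inside, 0 outside
    inv = (pts - center - t) @ R + center   # inverse of x -> R @ (x - c) + c + t
    return w * inv + (1.0 - w) * pts

# Toy usage: rotate the inner region 45 degrees about z and shift it along x.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
pts = np.random.rand(1024, 3) * 2 - 1
warped = edit_query_points(pts, np.zeros(3), 0.3, 0.6, R, np.array([0.2, 0.0, 0.0]))
# `warped` is what would be fed to the frozen field's density/color query.
print(warped.shape)
```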
🔥Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions
Ayaan Haque, Matthew Tancik, Alexei A. Efros, Aleksander Holynski, Angjoo Kanazawa
arXiv preprint, 22 Mar 2023
Abstract
We propose a method for editing NeRF scenes with text-instructions. Given a NeRF of a scene and the collection of images used to reconstruct it, our method uses an image-conditioned diffusion model (InstructPix2Pix) to iteratively edit the input images while optimizing the underlying scene, resulting in an optimized 3D scene that respects the edit instruction. We demonstrate that our proposed method is able to edit large-scale, real-world scenes, and is able to accomplish more realistic, targeted edits than prior work.
[arXiv] [Project] [Github] [Video]
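The alternating scheme can be summarized by the loop sketch below; `render_view`, `edit_image`, `train_step`, and the `dataset` interface are hypothetical stand-ins supplied by the caller, not the released implementation.

```python
import random

def iterative_dataset_update(nerf, dataset, edit_image, render_view, train_step,
                             n_iters=30_000, update_every=10):
    """Sketch of an iterative dataset update: a randomly chosen training view is
    re-rendered, edited by an image-conditioned diffusion model following the
    text instruction, and swapped into the training set; NeRF optimization
    continues on the partially edited dataset throughout."""
    for it in range(n_iters):
        if it % update_every == 0:
            v = random.randrange(len(dataset))
            current_render = render_view(nerf, dataset.pose(v))
            edited = edit_image(dataset.original_image(v), current_render,
                                dataset.instruction)
            dataset.replace_image(v, edited)     # dataset gradually becomes "edited"
        train_step(nerf, dataset.sample_rays())  # standard photometric NeRF step
    return nerf
```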
SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field
Chong Bao, Yinda Zhang, Bangbang Yang, Tianxing Fan, Zesong Yang, Hujun Bao, Guofeng Zhang, Zhaopeng Cui
CVPR 2023, 23 Mar 2023
[arXiv] [Project] [Video]
PartNeRF: Generating Part-Aware Editable 3D Shapes without 3D Supervision
Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, Leonidas Guibas
CVPR 2023, 16 Mar 2023
[arXiv] [Project] [Github]
InpaintNeRF360: Text-Guided 3D Inpainting on Unbounded Neural Radiance Fields
Dongqing Wang, Tong Zhang, Alaa Abboud, Sabine Süsstrunk
arXiv preprint, 24 May 2023
[arXiv]
FusedRF: Fusing Multiple Radiance Fields
Rahul Goel, Dhawal Sirikonda, Rajvi Shah, PJ Narayanan
CVPR Workshop, 7 Jun 2023
[arXiv]
RePaint-NeRF: NeRF Editting via Semantic Masks and Diffusion Models
Xingchen Zhou, Ying He, F. Richard Yu, Jianqiang Li, You Li
IJCAI 2023, 9 Jun 2023
[arXiv] [Github]
Dyn-E: Local Appearance Editing of Dynamic Neural Radiance Fields
Shangzhan Zhang, Sida Peng, Yinji ShenTu, Qing Shuai, Tianrun Chen, Kaicheng Yu, Hujun Bao, Xiaowei Zhou
arXiv preprint, 24 Jul 2023
[arXiv] [Project]
Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields
Xiangyu Wang, Jingsen Zhu, Qi Ye, Yuchi Huo, Yunlong Ran, Zhihua Zhong, Jiming Chen
ICCV 2023, 27 Jul 2023
[arXiv] [Project] [Github]
Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing
Junyi Zeng, Chong Bao, Rui Chen, Zilong Dong, Guofeng Zhang, Hujun Bao, Zhaopeng Cui
ACMMM 2023, 7 Aug 2023
[arXiv] [Project]
Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis
Yuxin Wang, Wayne Wu, Dan Xu
ICCV 2023, 5 Aug 2023
[arXiv] [Project]
DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields
Junzhe Zhang, Yushi Lan, Shuai Yang, Fangzhou Hong, Quan Wang, Chai Kiat Yeo, Ziwei Liu, Chen Change Loy
ICCV 2023, 8 Sep 2023
[arXiv] [Project] [Github]
Locally Stylized Neural Radiance Fields
Hong-Wing Pang, Binh-Son Hua, Sai-Kit Yeung
ICCV 2023, 19 Sep 2023
[arXiv]
Language-driven Object Fusion into Neural Radiance Fields with Pose-Conditioned Dataset Updates
Ka Chun Shum, Jaeyeon Kim, Binh-Son Hua, Duc Thanh Nguyen, Sai-Kit Yeung
arXiv preprint, 20 Sep 2023
[arXiv]
MM-NeRF: Multimodal-Guided 3D Multi-Style Transfer of Neural Radiance Field
Zijiang Yang, Zhongwei Qiu, Chang Xu, Dongmei Fu
arXiv preprint, 24 Sep 2023
[arXiv]
ED-NeRF: Efficient Text-Guided Editing of 3D Scene using Latent Space NeRF
Jangho Park, Gihyun Kwon, Jong Chul Ye
arXiv preprint, 4 Oct 2023
[arXiv]
Neural Impostor: Editing Neural Radiance Fields with Explicit Shape Manipulation
Ruiyang Liu, Jinxu Xiang, Bowen Zhao, Ran Zhang, Jingyi Yu, Changxi Zheng
Pacific Graphics 2023, 9 Oct 2023
[arXiv]
A Real-time Method for Inserting Virtual Objects into Neural Radiance Fields
Keyang Ye, Hongzhi Wu, Xin Tong, Kun Zhou
arXiv preprint, 9 Oct 2023
[arXiv]
Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training
Runze He, Shaofei Huang, Xuecheng Nie, Tianrui Hui, Luoqi Liu, Jiao Dai, Jizhong Han, Guanbin Li, Si Liu
arXiv preprint, 4 Dec 2023
[arXiv] [Project]
NeRFiller: Completing Scenes via Generative 3D Inpainting
Ethan Weber, Aleksander Hołyński, Varun Jampani, Saurabh Saxena, Noah Snavely, Abhishek Kar, Angjoo Kanazawa
CVPR 2024, 7 Dec 2023
[arXiv] [Project] [Code]
Towards 4D Human Video Stylization
Tiantian Wang, Xinxin Zuo, Fangzhou Mu, Jian Wang, Ming-Hsuan Yang
arXiv preprint, 7 Dec 2023
[arXiv]
InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes
Mohamad Shahbazi, Liesbeth Claessens, Michael Niemeyer, Edo Collins, Alessio Tonioni, Luc Van Gool, Federico Tombari
arXiv preprint, 10 Jan 2024
[arXiv]
Scaling Face Interaction Graph Networks to Real World Scenes
Tatiana Lopez-Guevara, Yulia Rubanova, William F. Whitney, Tobias Pfaff, Kimberly Stachenfeld, Kelsey R. Allen
arXiv preprint, 22 Jan 2024
[arXiv]
Exploration and Improvement of Nerf-based 3D Scene Editing Techniques
Shun Fang, Ming Cui, Xing Feng, Yanan Zhang
arXiv preprint, 23 Jan 2024
[arXiv]
ViCA-NeRF: View-Consistency-Aware 3D Editing of Neural Radiance Fields
Jiahua Dong, Yu-Xiong Wang
NeurIPS 2023, 1 Feb 2024
[arXiv] [Code](https://github.com/Dongjiahua/VICA-NeRF)
Geometry Transfer for Stylizing Radiance Fields
Hyunyoung Jung, Seonghyeon Nam, Nikolaos Sarafianos, Sungjoo Yoo, Alexander Sorkine-Hornung, Rakesh Ranjan
CVPR 2024, 1 Feb 2024
[arXiv] [Project] [Code]
Consolidating Attention Features for Multi-view Image Editing
Or Patashnik, Rinon Gal, Daniel Cohen-Or, Jun-Yan Zhu, Fernando De la Torre
arXiv preprint, 22 Feb 2024
[arXiv] [Project]
SealD-NeRF: Interactive Pixel-Level Editing for Dynamic Scenes by Neural Radiance Fields
Zhentao Huang, Yukun Shi, Neil Bruce, Minglun Gong
arXiv preprint, 21 Feb 2024
[arXiv]
StyleDyRF: Zero-shot 4D Style Transfer for Dynamic Neural Radiance Fields
Hongbin Xu, Weitao Chen, Feng Xiao, Baigui Sun, Wenxiong Kang
arXiv preprint, 13 Mar 2024
[arXiv] [Code]
GenN2N: Generative NeRF2NeRF Translation
Xiangyue Liu, Han Xue, Kunming Luo, Ping Tan, Li Yi
CVPR 2024, 3 Apr 2024
[arXiv] [Project] [Code]
Freditor: High-Fidelity and Transferable NeRF Editing by Frequency Decomposition
Yisheng He, Weihao Yuan, Siyu Zhu, Zilong Dong, Liefeng Bo, Qixing Huang
arXiv preprint, 3 Apr 2024
[arXiv]
Taming Latent Diffusion Model for Neural Radiance Field Inpainting
Chieh Hubert Lin, Changil Kim, Jia-Bin Huang, Qinbo Li, Chih-Yao Ma, Johannes Kopf, Ming-Hsuan Yang, Hung-Yu Tseng
arXiv preprint, 15 Apr 2024
[arXiv] [Project]
Depth Priors in Removal Neural Radiance Fields
Zhihao Guo, Peng Wang
arXiv preprint, 1 May 2024
[arXiv]
NeRF-Insert: 3D Local Editing with Multimodal Control Signals
Benet Oriol Sabat, Alessandro Achille, Matthew Trager, Stefano Soatto
arXiv preprint, 30 Apr 2024
[arXiv]
MVIP-NeRF: Multi-view 3D Inpainting on NeRF Scenes via Diffusion Prior
Honghua Chen, Chen Change Loy, Xingang Pan
arXiv preprint, 5 May 2024
[arXiv]
Point Resampling and Ray Transformation Aid to Editable NeRF Models
Zhenyang Li, Zilong Chen, Feifan Qu, Mingqing Wang, Yizhou Zhao, Kai Zhang, Yifan Peng
arXiv preprint, 12 May 2024
[arXiv]
ExtraNeRF: Visibility-Aware View Extrapolation of Neural Radiance Fields with Diffusion Models
Meng-Li Shih, Wei-Chiu Ma, Lorenzo Boyice, Aleksander Holynski, Forrester Cole, Brian L. Curless, Janne Kontkanen
CVPR 2024, 10 Jun 2024
[arXiv]
NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows
Zhenggang Tang, Zhongzheng Ren, Xiaoming Zhao, Bowen Wen, Jonathan Tremblay, Stan Birchfield, Alexander Schwing
CVPR 2024, 15 Jun 2024
[arXiv] [Code]
IE-NeRF: Inpainting Enhanced Neural Radiance Fields in the Wild
Shuaixian Wang, Haoran Xu, Yaokun Li, Jiwei Chen, Guang Tan
arXiv preprint, 15 Jul 2024
[arXiv]
Relighting Neural Radiance Fields With Shadow and Highlight Hints
Chong Zeng, Guojun Chen, Yue Dong, Pieter Peers, Hongzhi Wu, Xin Tong
SIGGRAPH 2023
[Paper] [Project] [Github]
Relighting Scenes with Object Insertions in Neural Radiance Fields
Xuening Zhu, Renjiao Yi, Xin Wen, Chenyang Zhu, Kai Xu
arXiv preprint, 21 Jun 2024
[arXiv]
Baking Relightable NeRF for Real-time Direct/Indirect Illumination Rendering
Euntae Choi, Vincent Carpentier, Seunghun Shin, Sungjoo Yoo
arXiv preprint, 16 Sep 2024
[arXiv]
NeuralUDF: Learning Unsigned Distance Fields for Multi-view Reconstruction of Surfaces with Arbitrary Topologies
Xiaoxiao Long, Cheng Lin, Lingjie Liu, Yuan Liu, Peng Wang, Christian Theobalt, Taku Komura, Wenping Wang
CVPR 2023, 25 Nov 2022
[arXiv] [Project] [Github] [Video]
NeAT: Learning Neural Implicit Surfaces with Arbitrary Topologies from Multi-view Images
Xiaoxu Meng, Weikai Chen, Bo Yang
CVPR 2023, 21 Mar 2023
[arXiv] [Project] [Github] [Video]
NeuralClothSim: Neural Deformation Fields Meet the Kirchhoff-Love Thin Shell Theory
Navami Kairanda, Marc Habermann, Christian Theobalt, Vladislav Golyanik
arXiv preprint, 24 Aug 2023
[arXiv] [Project] [Video]
SSDNeRF: Semantic Soft Decomposition of Neural Radiance Fields
Siddhant Ranade, Christoph Lassner, Kai Li, Christian Haene, Shen-Chi Chen, Jean-Charles Bazin, Sofien Bouaziz
arXiv preprint, 7 Dec 2022
[arXiv] [Project] [Video]
🔥Panoptic Lifting for 3D Scene Understanding with Neural Fields
Yawar Siddiqui, Lorenzo Porzi, Samuel Rota Buló, Norman Müller, Matthias Nießner, Angela Dai, Peter Kontschieder
CVPR 2023, 19 Dec 2022
Abstract
We propose Panoptic Lifting, a novel approach for learning panoptic 3D volumetric representations from images of in-the-wild scenes. Once trained, our model can render color images together with 3D-consistent panoptic segmentation from novel viewpoints. Unlike existing approaches which use 3D input directly or indirectly, our method requires only machine-generated 2D panoptic segmentation masks inferred from a pre-trained network. Our core contribution is a panoptic lifting scheme based on a neural field representation that generates a unified and multi-view consistent, 3D panoptic representation of the scene. To account for inconsistencies of 2D instance identifiers across views, we solve a linear assignment with a cost based on the model's current predictions and the machine-generated segmentation masks, thus enabling us to lift 2D instances to 3D in a consistent way. We further propose and ablate contributions that make our method more robust to noisy, machine-generated labels, including test-time augmentations for confidence estimates, segment consistency loss, bounded segmentation fields, and gradient stopping. Experimental results validate our approach on the challenging Hypersim, Replica, and ScanNet datasets, improving by 8.4, 13.8, and 10.6% in scene-level PQ over state of the art.
[arXiv] [Project] [Github] [Video]
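A small sketch of the linear-assignment step mentioned in the abstract, assuming SciPy is available: for one view, rendered surrogate-instance probabilities are matched to the machine-generated 2D masks by minimizing a negative soft-IoU cost, so the same 3D instance channel can be supervised consistently across views. The shapes and names are illustrative only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_instance_ids(pred_probs, gt_masks):
    """pred_probs: (K, H, W) softmaxed surrogate-instance probabilities rendered
    for one view; gt_masks: (M, H, W) binary machine-generated instance masks.
    Returns a mapping gt_index -> surrogate channel via minimum-cost assignment."""
    K, M = pred_probs.shape[0], gt_masks.shape[0]
    cost = np.zeros((M, K))
    for m in range(M):
        for k in range(K):
            inter = (pred_probs[k] * gt_masks[m]).sum()
            union = pred_probs[k].sum() + gt_masks[m].sum() - inter + 1e-8
            cost[m, k] = -inter / union          # negative soft-IoU as cost
    rows, cols = linear_sum_assignment(cost)     # Hungarian matching
    return dict(zip(rows.tolist(), cols.tolist()))

# Toy usage.
probs = np.random.dirichlet(np.ones(5), size=(32, 32)).transpose(2, 0, 1)
masks = (np.random.rand(3, 32, 32) > 0.7).astype(np.float32)
print(match_instance_ids(probs, masks))
```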
Segment Anything in 3D with NeRFs
Jiazhong Cen, Zanwei Zhou, Jiemin Fang, Wei Shen, Lingxi Xie, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian
arXiv preprint, 24 Apr 2023
[arXiv] [Project] [Github] [Video]
Obj-NeRF: Extract Object NeRFs from Multi-view Images
Zhiyi Li, Lihe Ding, Tianfan Xue
arXiv preprint, 26 Nov 2023
[arXiv]
SANeRF-HQ: Segment Anything for NeRF in High Quality
Yichen Liu, Benran Hu, Chi-Keung Tang, Yu-Wing Tai
arXiv preprint, 3 Dec 2023
[arXiv] [Project]
Slot-guided Volumetric Object Radiance Fields
Di Qi, Tong Yang, Xiangyu Zhang
NeurIPS 2023, 4 Jan 2024
[arXiv]
OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views
Francis Engelmann, Fabian Manhardt, Michael Niemeyer, Keisuke Tateno, Marc Pollefeys, Federico Tombari
ICLR 2024, 4 Apr 2024
[arXiv] [Project] [Code]
Rethinking Open-Vocabulary Segmentation of Radiance Fields in 3D Space
Hyunjee Lee, Youngsik Yun, Jeongmin Bae, Seoha Kim, Youngjung Uh
arXiv preprint, 14 Aug 2024
[arXiv] [Project]
DiscoNeRF: Class-Agnostic Object Field for 3D Object Discovery
Corentin Dumery, Aoxiang Fan, Ren Li, Nicolas Talabot, Pascal Fua
arXiv preprint, 19 Aug 2024
[arXiv]
Multi-modal NeRF Self-Supervision for LiDAR Semantic Segmentation
Xavier Timoneda, Markus Herb, Fabian Duerr, Daniel Goehring, Fisher Yu
IROS 2024, 5 Nov 2024
[arXiv]
When LLMs step into the 3D World: A Survey and Meta-Analysis of 3D Tasks via Multi-modal Large Language Models
Xianzheng Ma, Yash Bhalgat, Brandon Smart, Shuai Chen, Xinghui Li, Jian Ding, Jindong Gu, Dave Zhenyu Chen, Songyou Peng, Jia-Wang Bian, Philip H Torr, Marc Pollefeys, Matthias Nießner, Ian D Reid, Angel X. Chang, Iro Laina, Victor Adrian Prisacariu
arXiv preprint, 16 May 2024
[arXiv] [Code]
Exploring Multi-modal Neural Scene Representations With Applications on Thermal Imaging
Mert Özer, Maximilian Weiherer, Martin Hundhausen, Bernhard Egger
arXiv preprint, 18 Mar 2024
[arXiv] [Project]
Connecting NeRFs, Images, and Text
Francesco Ballerini, Pierluigi Zama Ramirez, Roberto Mirabella, Samuele Salti, Luigi Di Stefano
CVPRW-INRV 2024, 11 Apr 2024
[arXiv]
NeRF: Multi-Modal Decomposition NeRF with 3D Feature Fields
Ning Wang, Lefei Zhang, Angel X Chang
arXiv preprint, 8 May 2024
[arXiv]
Self-supervised Pre-training for Transferable Multi-modal Perception
Xiaohao Xu, Tianyi Zhang, Jinrong Yang, Matthew Johnson-Roberson, Xiaonan Huang
arXiv preprint, 28 May 2024
[arXiv]
uSF: Learning Neural Semantic Field with Uncertainty
Vsevolod Skorokhodov, Darya Drozdova, Dmitry Yudin
arXiv preprint, 13 Dec 2023
[arXiv] [Code]
GARField: Group Anything with Radiance Fields
Chung Min Kim, Mingxuan Wu, Justin Kerr, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa
CVPR 2024, 17 Jan 2024
[arXiv] [Project] [Code]
OV-NeRF: Open-vocabulary Neural Radiance Fields with Vision and Language Foundation Models for 3D Semantic Understanding
Guibiao Liao, Kaichen Zhou, Zhenyu Bao, Kanglin Liu, Qing Li
arXiv preprint, 7 Feb 2024
[arXiv]
NeRF Analogies: Example-Based Visual Attribute Transfer for NeRFs
Michael Fischer, Zhengqin Li, Thu Nguyen-Phuoc, Aljaz Bozic, Zhao Dong, Carl Marshall, Tobias Ritschel
arXiv preprint, 13 Feb 2024
[arXiv]
GSNeRF: Generalizable Semantic Neural Radiance Fields with Enhanced 3D Scene Understanding
Zi-Ting Chou, Sheng-Yu Huang, I-Jieh Liu, Yu-Chiang Frank Wang
CVPR 2024, 6 Mar 2024
[arXiv]
Finding Waldo: Towards Efficient Exploration of NeRF Scene Space
Evangelos Skartados, Mehmet Kerim Yucel, Bruno Manganelli, Anastasios Drosou, Albert Saà-Garriga
ACM MMSys 24, 7 Mar 2024
[arXiv]
NeRF-Supervised Feature Point Detection and Description
Ali Youssef, Francisco Vasconcelos
arXiv preprint, 13 Mar 2024
[arXiv]
Exploring 3D-aware Latent Spaces for Efficiently Learning Numerous Scenes
Antoine Schnepf, Karim Kassab, Jean-Yves Franceschi, Laurent Caraffa, Flavian Vasile, Jeremie Mary, Andrew Comport, Valérie Gouet-Brunet
3DMV-CVPR 2024, 18 Mar 2024
[arXiv] [Project]
Semantic Is Enough: Only Semantic Information For NeRF Reconstruction
Ruibo Wang, Song Zhang, Ping Huang, Donghai Zhang, Wei Yan
arXiv preprint, 24 Mar 2024
[arXiv]
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, Rares Ambrus
CVPR Neural Rendering Intelligence Workshop 2024, 1 Apr 2024
[arXiv] [Project]
NeRF-DetS: Enhancing Multi-View 3D Object Detection with Sampling-adaptive Network of Continuous NeRF-based Representation
Chi Huang, Xinyang Li, Shengchuan Zhang, Liujuan Cao, Rongrong Ji
arXiv preprint, 22 Apr 2024
[arXiv]
Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling
Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang, Pedro Miraldo, Suhas Lohit, Moitreya Chatterjee
CVPR 2024, 6 Jun 2024
[arXiv] [Project]
OpenObj: Open-Vocabulary Object-Level Neural Radiance Fields with Fine-Grained Understanding
Yinan Deng, Jiahui Wang, Jingyu Zhao, Jianyu Dou, Yi Yang, Yufeng Yue
arXiv preprint, 12 Jun 2024
[arXiv] [Project] [Video]
Active Scout: Multi-Target Tracking Using Neural Radiance Fields in Dense Urban Environments
Christopher D. Hsu, Pratik Chaudhari
arXiv preprint, 11 Jun 2024
[arXiv]
DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features
Letian Wang, Seung Wook Kim, Jiawei Yang, Cunjun Yu, Boris Ivanovic, Steven L. Waslander, Yue Wang, Sanja Fidler, Marco Pavone, Peter Karkus
arXiv preprint, 17 Jun 2024
[arXiv]
LLaNA: Large Language and NeRF Assistant
Andrea Amaduzzi, Pierluigi Zama Ramirez, Giuseppe Lisanti, Samuele Salti, Luigi Di Stefano
arXiv preprint, 17 Jun 2024
[arXiv]
Learning with Noisy Ground Truth: From 2D Classification to 3D Reconstruction
Yangdi Lu, Wenbo He
arXiv preprint, 23 Jun 2024
[arXiv]
Fast and Efficient: Mask Neural Fields for 3D Scene Segmentation
Zihan Gao, Lingling Li, Licheng Jiao, Fang Liu, Xu Liu, Wenping Ma, Yuwei Guo, Shuyuan Yang
arXiv preprint, 1 Jul 2024
[arXiv]
Improving 3D Finger Traits Recognition via Generalizable Neural Rendering
Hongbin Xu, Junduan Huang, Yuer Ma, Zifeng Li, Wenxiong Kang
IJCV 2024, 12 Oct 2024
[arXiv] [Project]
🔥Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement
Jiaxiang Tang, Hang Zhou, Xiaokang Chen, Tianshu Hu, Errui Ding, Jingdong Wang, Gang Zeng
arXiv preprint, 3 Mar 2023
Abstract
Neural Radiance Fields (NeRF) have constituted a remarkable breakthrough in image-based 3D reconstruction. However, their implicit volumetric representations differ significantly from the widely-adopted polygonal meshes and lack support from common 3D software and hardware, making their rendering and manipulation inefficient. To overcome this limitation, we present a novel framework that generates textured surface meshes from images. Our approach begins by efficiently initializing the geometry and view-dependency decomposed appearance with a NeRF. Subsequently, a coarse mesh is extracted, and an iterative surface refining algorithm is developed to adaptively adjust both vertex positions and face density based on re-projected rendering errors. We jointly refine the appearance with geometry and bake it into texture images for real-time rendering. Extensive experiments demonstrate that our method achieves superior mesh quality and competitive rendering quality.
[arXiv] [Project] [Github] [Video]
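One way to picture the error-driven refinement, as a sketch under simplifying assumptions rather than the paper's algorithm: per-pixel re-projection errors are accumulated onto the faces that produced them, and faces are flagged for subdivision or decimation by percentile thresholds. The actual vertex updates and remeshing are omitted.

```python
import numpy as np

def face_refinement_flags(face_ids, pixel_err, n_faces, hi_pct=90, lo_pct=10):
    """face_ids: (P,) index of the mesh face visible at each sampled pixel;
    pixel_err: (P,) re-projected rendering error at those pixels.
    Faces accumulating large error are marked for subdivision (more capacity),
    faces with consistently small error for decimation (fewer triangles)."""
    err_sum = np.bincount(face_ids, weights=pixel_err, minlength=n_faces)
    hits = np.bincount(face_ids, minlength=n_faces).clip(min=1)
    face_err = err_sum / hits                          # mean error per face
    subdivide = face_err > np.percentile(face_err, hi_pct)
    decimate = face_err < np.percentile(face_err, lo_pct)
    return subdivide, decimate

# Toy usage with random face hits and errors.
flags = face_refinement_flags(np.random.randint(0, 200, 5000),
                              np.random.rand(5000), n_faces=200)
print(flags[0].sum(), "faces to subdivide,", flags[1].sum(), "to decimate")
```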
TeTriRF: Temporal Tri-Plane Radiance Fields for Efficient Free-Viewpoint Video
Minye Wu, Zehao Wang, Georgios Kouros, Tinne Tuytelaars
arXiv preprint, 10 Dec 2023
[arXiv]
R^2-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement
Haoyang Wang, Liming Liu, Quanlu Jia, Jiangkai Wu, Haodan Zhang, Peiheng Wang, Xinggong Zhang
arXiv preprint, 19 Aug 2024
[arXiv]
Learning Neural Volumetric Field for Point Cloud Geometry Compression
Yueyu Hu, Yao Wang
PCS 2022, 11 Dec 2022
[arXiv] [Project] [Github]
Towards Scalable Neural Representation for Diverse Videos
Bo He, Xitong Yang, Hanyu Wang, Zuxuan Wu, Hao Chen, Shuaiyi Huang, Yixuan Ren, Ser-Nam Lim, Abhinav Shrivastava
CVPR 2023, 24 Mar 2023
[arXiv] [Project] [Github]
🔥Neural Residual Radiance Fields for Streamably Free-Viewpoint Videos
Liao Wang, Qiang Hu, Qihan He, Ziyu Wang, Jingyi Yu, Tinne Tuytelaars, Lan Xu, Minye Wu
CVPR 2023, 11 Dec 2022
Abstract
The success of the Neural Radiance Fields (NeRFs) for modeling and free-view rendering static objects has inspired numerous attempts on dynamic scenes. Current techniques that utilize neural rendering for facilitating free-view videos (FVVs) are restricted to either offline rendering or are capable of processing only brief sequences with minimal motion. In this paper, we present a novel technique, Residual Radiance Field or ReRF, as a highly compact neural representation to achieve real-time FVV rendering on long-duration dynamic scenes. ReRF explicitly models the residual information between adjacent timestamps in the spatial-temporal feature space, with a global coordinate-based tiny MLP as the feature decoder. Specifically, ReRF employs a compact motion grid along with a residual feature grid to exploit inter-frame feature similarities. We show such a strategy can handle large motions without sacrificing quality. We further present a sequential training scheme to maintain the smoothness and the sparsity of the motion/residual grids. Based on ReRF, we design a special FVV codec that achieves three orders of magnitudes compression rate and provides a companion ReRF player to support online streaming of long-duration FVVs of dynamic scenes. Extensive experiments demonstrate the effectiveness of ReRF for compactly representing dynamic radiance fields, enabling an unprecedented free-viewpoint viewing experience in speed and quality.
VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu
arXiv preprint, 3 Dec 2023
[arXiv] [Project]
🔥SlimmeRF: Slimmable Radiance Fields
Shiran Yuan, Hao Zhao
3DV 2024, 15 Dec 2023
Abstract
Neural Radiance Field (NeRF) and its variants have recently emerged as successful methods for novel view synthesis and 3D scene reconstruction. However, most current NeRF models either achieve high accuracy using large model sizes, or achieve high memory-efficiency by trading off accuracy. This limits the applicable scope of any single model, since high-accuracy models might not fit in low-memory devices, and memory-efficient models might not satisfy high-quality requirements. To this end, we present SlimmeRF, a model that allows for instant test-time trade-offs between model size and accuracy through slimming, thus making the model simultaneously suitable for scenarios with different computing budgets. We achieve this through a newly proposed algorithm named Tensorial Rank Incrementation (TRaIn) which increases the rank of the model's tensorial representation gradually during training. We also observe that our model allows for more effective trade-offs in sparse-view scenarios, at times even achieving higher accuracy after being slimmed. We credit this to the fact that erroneous information such as floaters tend to be stored in components corresponding to higher ranks. Our implementation is available at this https URL.
Efficient Dynamic-NeRF Based Volumetric Video Coding with Rate Distortion Optimization
Zhiyu Zhang, Guo Lu, Huanxiong Liang, Anni Tang, Qiang Hu, Li Song
arXiv preprint, 2 Feb 2024
[arXiv]
One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation
Shuangkang Fang, Weixin Xu, Heng Wang, Yi Yang, Yufeng Wang, Shuchang Zhou
AAAI 2023, 29 Nov 2022
[arXiv] [Project] [Github] [PVD-AL Github]
NeRFs to Gaussian Splats, and Back
Siming He, Zach Osman, Pratik Chaudhari
arXiv preprint, 15 May 2024
[arXiv] [Code]
MedNeRF: Medical Neural Radiance Fields for Reconstructing 3D-aware CT-Projections from a Single X-ray
Abril Corona-Figueroa, Jonathan Frawley, Sam Bond-Taylor, Sarath Bethapudi, Hubert P. H. Shum, Chris G. Willcocks
EMBC 2022, 2 Feb 2022
[arXiv] [Github]
NeAT: Neural Adaptive Tomography
Darius Rückert, Yuanhao Wang, Rui Li, Ramzi Idoughi, Wolfgang Heidrich
arXiv preprint, 4 Feb 2022
[arXiv]
NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction
Ruyi Zha, Yanhao Zhang, Hongdong Li
MICCAI 2022, 29 Sep 2022
[arXiv]
SNAF: Sparse-view CBCT Reconstruction with Neural Attenuation Fields
Yu Fang, Lanzhuju Mei, Changjian Li, Yuan Liu, Wenping Wang, Zhiming Cui, Dinggang Shen
arXiv preprint, 30 Nov 2022
[arXiv]
Learning Deep Intensity Field for Extremely Sparse-View CBCT Reconstruction
Yiqun Lin, Zhongjin Luo, Wei Zhao, Xiaomeng Li
arXiv preprint, 12 Mar 2023
[arXiv]
Geometry-Aware Attenuation Field Learning for Sparse-View CBCT Reconstruction
Zhentao Liu, Yu Fang, Changjian Li, Han Wu, Yuan Liu, Zhiming Cui, Dinggang Shen
arXiv preprint, 26 Mar 2023
[arXiv]
ColonNeRF: Neural Radiance Fields for High-Fidelity Long-Sequence Colonoscopy Reconstruction
Yufei Shi, Beijia Lu, Jia-Wei Liu, Ming Li, Mike Zheng Shou
arXiv preprint, 4 Dec 2023
[arXiv] [Project]
Efficient Deformable Tissue Reconstruction via Orthogonal Neural Plane
Chen Yang, Kailing Wang, Yuehao Wang, Qi Dou, Xiaokang Yang, Wei Shen
MICCAI 2024, 23 Dec 2023
[arXiv] [Code]
BioNeRF: Biologically Plausible Neural Radiance Fields for View Synthesis
Leandro A. Passos, Douglas Rodrigues, Danilo Jodas, Kelton A. P. Costa, João Paulo Papa
arXiv preprint, 11 Feb 2024
[arXiv]
NeRF Solves Undersampled MRI Reconstruction
Tae Jun Jang, Chang Min Hyun
arXiv preprint, 20 Feb 2024
[arXiv]
FLex: Joint Pose and Dynamic Radiance Fields Optimization for Stereo Endoscopic Videos
Florian Philipp Stilz, Mert Asim Karaoglu, Felix Tristram, Nassir Navab, Benjamin Busam, Alexander Ladikos
arXiv preprint, 18 Mar 2024
[arXiv]
High-fidelity Endoscopic Image Synthesis by Utilizing Depth-guided Neural Surfaces
Baoru Huang, Yida Wang, Anh Nguyen, Daniel Elson, Francisco Vasconcelos, Danail Stoyanov
arXiv preprint, 20 Apr 2024
[arXiv]
DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction
Chenhe Du, Xiyue Lin, Qing Wu, Xuanyu Tian, Ying Su, Zhe Luo, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang
arXiv preprint, 27 Apr 2024
[arXiv]
3D Vessel Reconstruction from Sparse-View Dynamic DSA Images via Vessel Probability Guided Attenuation Learning
Zhentao Liu, Huangxuan Zhao, Wenhui Qin, Zhenghong Zhou, Xinggang Wang, Wenping Wang, Xiaochun Lai, Chuansheng Zheng, Dinggang Shen, Zhiming Cui
arXiv preprint, 17 May 2024
[arXiv]
Neural Radiance Fields for Novel View Synthesis in Monocular Gastroscopy
Zijie Jiang, Yusuke Monno, Masatoshi Okutomi, Sho Suzuki, Kenji Miki
EMBC 2024, 29 May 2024
[arXiv]
Shorter SPECT Scans Using Self-supervised Coordinate Learning to Synthesize Skipped Projection Views
Zongyu Li, Yixuan Jia, Xiaojian Xu, Jason Hu, Jeffrey A. Fessler, Yuni K. Dewaraja
arXiv preprint, 27 Jun 2024
[arXiv]
3D Reconstruction of Protein Structures from Multi-view AFM Images using Neural Radiance Fields (NeRFs)
Jaydeep Rade, Ethan Herron, Soumik Sarkar, Anwesha Sarkar, Adarsh Krishnamurthy
arXiv preprint, 12 Aug 2024
[arXiv]
NeRF-US: Removing Ultrasound Imaging Artifacts from Neural Radiance Fields in the Wild
Rishit Dagli, Atsuhiro Hibi, Rahul G. Krishnan, Pascal N. Tyrrell
arXiv preprint, 13 Aug 2024
[arXiv]
NeRF-CA: Dynamic Reconstruction of X-ray Coronary Angiography with Extremely Sparse-views
Kirsten W.H. Maas, Danny Ruijters, Anna Vilanova, Nicola Pezzotti
arXiv preprint, 29 Aug 2024
[arXiv]
UC-NeRF: Uncertainty-aware Conditional Neural Radiance Fields from Endoscopic Sparse Views
Jiaxin Guo, Jiangliu Wang, Ruofeng Wei, Di Kang, Qi Dou, Yun-hui Liu
arXiv preprint, 4 Sep 2024
[arXiv]
Intraoperative Registration by Cross-Modal Inverse Neural Rendering
Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M. Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine
arXiv preprint, 18 Sep 2024
[arXiv] [Project]
HybridNeRF-Stereo Vision: Pioneering Depth Estimation and 3D Reconstruction in Endoscopy
Pengcheng Chen, Wenhao Li, Nicole Gunderson, Jeremy Ruthberg, Randall Bly, Waleed M. Abuzeid, Zhenglong Sun, Eric J. Seibel
arXiv preprint, 10 Oct 2024
[arXiv]
Neural rendering enables dynamic tomography
Ivan Grega, William F. Whitney, Vikram S. Deshpande
NeurIPS 2024, 27 Oct 2024
[arXiv] [Project]
🔥Multi-view Inverse Rendering for Large-scale Real-world Indoor Scenes
Zhen Li, Lingli Wang, Mofang Cheng, Cihui Pan, Jiaqi Yang
CVPR 2023, 18 Nov 2022
Abstract
We present an efficient multi-view inverse rendering method for large-scale real-world indoor scenes that reconstructs global illumination and physically-reasonable SVBRDFs. Unlike previous representations, where the global illumination of large scenes is simplified as multiple environment maps, we propose a compact representation called Texture-based Lighting (TBL). It consists of 3D mesh and HDR textures, and efficiently models direct and infinite-bounce indirect lighting of the entire large scene. Based on TBL, we further propose a hybrid lighting representation with precomputed irradiance, which significantly improves the efficiency and alleviates the rendering noise in the material optimization. To physically disentangle the ambiguity between materials, we propose a three-stage material optimization strategy based on the priors of semantic segmentation and room segmentation. Extensive experiments show that the proposed method outperforms the state-of-the-art quantitatively and qualitatively, and enables physically-reasonable mixed-reality applications such as material editing, editable novel view synthesis and relighting. The project page is at this https URL.
[arXiv] [Project] [Github] [Video]
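A hedged sketch of why precomputed irradiance speeds up material optimization: if incoming radiance can be queried from the scene lighting (the `query_radiance` callable below is a stand-in, not the paper's TBL), a cosine-weighted Monte Carlo gather can be cached once per surface point, after which the Lambertian diffuse term is just the albedo times the cached value.

```python
import numpy as np

def precompute_diffuse_light(points, normals, query_radiance, n_samples=128):
    """Cosine-weighted hemisphere sampling of incoming radiance at each surface
    point.  With this importance sampling, E/pi is estimated by the plain mean
    of the sampled radiance; multiplying by the diffuse albedo then gives the
    Lambertian shading term during material optimization."""
    cached = np.zeros((len(points), 3))
    for i, (p, n) in enumerate(zip(points, normals)):
        # Build an orthonormal frame around the surface normal.
        t = np.cross(n, [0.0, 0.0, 1.0] if abs(n[2]) < 0.9 else [1.0, 0.0, 0.0])
        t /= np.linalg.norm(t)
        b = np.cross(n, t)
        u1, u2 = np.random.rand(n_samples), np.random.rand(n_samples)
        r, phi = np.sqrt(u1), 2 * np.pi * u2
        local = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)], -1)
        dirs = local[:, :1] * t + local[:, 1:2] * b + local[:, 2:] * n
        cached[i] = query_radiance(p, dirs).mean(axis=0)
    return cached  # multiply by albedo to get the diffuse term at each point

# Toy usage with a constant gray "environment" as the radiance stand-in.
pts, nrm = np.zeros((4, 3)), np.tile([0.0, 0.0, 1.0], (4, 1))
print(precompute_diffuse_light(pts, nrm, lambda p, d: np.full((len(d), 3), 0.5)))
```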
🔥TensoIR: Tensorial Inverse Rendering
Haian Jin, Isabella Liu, Peijia Xu, Xiaoshuai Zhang, Songfang Han, Sai Bi, Xiaowei Zhou, Zexiang Xu, Hao Su
CVPR 2023, 24 Apr 2023
Abstract
We propose TensoIR, a novel inverse rendering approach based on tensor factorization and neural fields. Unlike previous works that use purely MLP-based neural fields, thus suffering from low capacity and high computation costs, we extend TensoRF, a state-of-the-art approach for radiance field modeling, to estimate scene geometry, surface reflectance, and environment illumination from multi-view images captured under unknown lighting conditions. Our approach jointly achieves radiance field reconstruction and physically-based model estimation, leading to photo-realistic novel view synthesis and relighting results. Benefiting from the efficiency and extensibility of the TensoRF-based representation, our method can accurately model secondary shading effects (like shadows and indirect lighting) and generally support input images captured under single or multiple unknown lighting conditions. The low-rank tensor representation allows us to not only achieve fast and compact reconstruction but also better exploit shared information under an arbitrary number of capturing lighting conditions. We demonstrate the superiority of our method to baseline methods qualitatively and quantitatively on various challenging synthetic and real-world scenes.
[arXiv] [Project] [Github] [Video]
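For intuition on the TensoRF-style backbone TensoIR extends, here is a tiny vector-matrix factorization sketch: a 3D field is represented by per-axis vectors and matrices whose rank components are summed at query time. Interpolation, appearance features, and the BRDF/illumination heads are all omitted; the class and sizes are illustrative, not the authors' code.

```python
import numpy as np

class VMField:
    """Rank-R vector-matrix factorization of a density-like 3D grid:
    f(x, y, z) ~= sum_r vx[r, x] * Myz[r, y, z]  (+ the two symmetric terms)."""
    def __init__(self, res=64, rank=8, seed=0):
        g = np.random.default_rng(seed)
        self.vec = [0.1 * g.standard_normal((rank, res)) for _ in range(3)]
        self.mat = [0.1 * g.standard_normal((rank, res, res)) for _ in range(3)]
        self.res = res

    def query(self, pts):                        # pts in [0, 1]^3, shape (N, 3)
        idx = np.clip((pts * (self.res - 1)).round().astype(int), 0, self.res - 1)
        x, y, z = idx[:, 0], idx[:, 1], idx[:, 2]
        out = (self.vec[0][:, x] * self.mat[0][:, y, z] +
               self.vec[1][:, y] * self.mat[1][:, x, z] +
               self.vec[2][:, z] * self.mat[2][:, x, y])
        return out.sum(axis=0)                   # (N,) field value per point

# Toy usage: nearest-cell lookup of the factorized field at random points.
field = VMField()
print(field.query(np.random.rand(5, 3)))
```

The appeal of the low-rank form is that the same small set of vectors and matrices can be shared across captures under different lightings, which is the property the abstract refers to.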
🔥Inverse Global Illumination using a Neural Radiometric Prior
Saeed Hadadan, Geng Lin, Jan Novák, Fabrice Rousselle, Matthias Zwicker
SIGGRAPH 2023, 3 May 2023
Abstract
Inverse rendering methods that account for global illumination are becoming more popular, but current methods require evaluating and automatically differentiating millions of path integrals by tracing multiple light bounces, which remains expensive and prone to noise. Instead, this paper proposes a radiometric prior as a simple alternative to building complete path integrals in a traditional differentiable path tracer, while still correctly accounting for global illumination. Inspired by the Neural Radiosity technique, we use a neural network as a radiance function, and we introduce a prior consisting of the norm of the residual of the rendering equation in the inverse rendering loss. We train our radiance network and optimize scene parameters simultaneously using a loss consisting of both a photometric term between renderings and the multi-view input images, and our radiometric prior (the residual term). This residual term enforces a physical constraint on the optimization that ensures that the radiance field accounts for global illumination. We compare our method to a vanilla differentiable path tracer, and more advanced techniques such as Path Replay Backpropagation. Despite the simplicity of our approach, we can recover scene parameters with comparable and in some cases better quality, at considerably lower computation times.
SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation
Jesus Zarzar, Bernard Ghanem
arXiv preprint, 28 Nov 2023
[arXiv]
NeRF as Non-Distant Environment Emitter in Physics-based Inverse Rendering
Jingwang Ling, Ruihan Yu, Feng Xu, Chun Du, Shuang Zhao
SIGGRAPH 2024, 7 Feb 2024
[arXiv] [Project]
Inverse Rendering of Glossy Objects via the Neural Plenoptic Function and Radiance Fields
Haoyuan Wang, Wenbo Hu, Lei Zhu, Rynson W. H. Lau
CVPR 2024, 24 Mar 2024
[arXiv] [Project]
IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination
Xi Chen, Sida Peng, Dongchen Yang, Yuan Liu, Bowen Pan, Chengfei Lv, Xiaowei Zhou
arXiv preprint, 17 Apr 2024
[arXiv] [Project]
MIRReS: Multi-bounce Inverse Rendering using Reservoir Sampling
Yuxin Dai, Qi Wang, Jingsen Zhu, Dianbing Xi, Yuchi Huo, Chen Qian, Ying He
arXiv preprint, 24 Jun 2024
[arXiv] [Project] [Video]
RRM: Relightable assets using Radiance guided Material extraction
Diego Gomez, Julien Philip, Adrien Kaiser, Élie Michel
CGI 2024, 8 Jul 2024
[arXiv]
Material Transforms from Disentangled NeRF Representations
Ivan Lopes, Jean-François Lalonde, Raoul de Charette
arXiv preprint, 12 Nov 2024
[arXiv] [Code]
NeRF-Tex: Neural Reflectance Field Textures
Hendrik Baatz, Jonathan Granskog, Marios Papas, Fabrice Rousselle, Jan Novák
EGSR 2021, 22 Jun 2021
[Paper] [Project]
NeRF-Texture: Texture Synthesis With Neural Radiance Fields
Yihua Huang, Yan-Pei Cao, Yu-Kun Lai, Ying Shan, Lin Gao
SIGGRAPH 2023
[Video]
NeRF in Robotics: A Survey
Guangming Wang, Lei Pan, Songyou Peng, Shaohui Liu, Chenfeng Xu, Yanzi Miao, Wei Zhan, Masayoshi Tomizuka, Marc Pollefeys, Hesheng Wang
arXiv preprint, 2 May 2024
[arXiv]
Benchmarking Neural Radiance Fields for Autonomous Robots: An Overview
Yuhang Ming, Xingrui Yang, Weihan Wang, Zheng Chen, Jinglun Feng, Yifan Xing, Guofeng Zhang
arXiv preprint, 9 May 2024
[arXiv]
A Survey on Occupancy Perception for Autonomous Driving: The Information Fusion Perspective
Huaiyuan Xu, Junliang Chen, Shiyu Meng, Yi Wang, Lap-Pui Chau
arXiv preprint, 8 May 2024
[arXiv]
MACS: Mass Conditioned 3D Hand and Object Motion Synthesis
Soshi Shimada, Franziska Mueller, Jan Bednarik, Bardia Doosti, Bernd Bickel, Danhang Tang, Vladislav Golyanik, Jonathan Taylor, Christian Theobalt, Thabo Beeler
arXiv preprint, 22 Dec 2023
[arXiv]
Fit-NGP: Fitting Object Models to Neural Graphics Primitives
Marwan Taher, Ignacio Alzugaray, Andrew J. Davison
arXiv preprint, 4 Jan 2024
[arXiv]
6-DoF Grasp Pose Evaluation and Optimization via Transfer Learning from NeRFs
Gergely Sóti, Xi Huang, Christian Wurll, Björn Hein
arXiv preprint, 15 Jan 2024
[arXiv] [Project]
Physical Priors Augmented Event-Based 3D Reconstruction
Jiaxu Wang, Junhao He, Ziyi Zhang, Renjing Xu
ICRA 2024, 30 Jan 2024
[arXiv] [Code]
Di-NeRF: Distributed NeRF for Collaborative Learning with Unknown Relative Poses
Mahboubeh Asadi, Kourosh Zareinia, Sajad Saeedi
arXiv preprint, 2 Feb 2024
[arXiv] [Project]
Reg-NF: Efficient Registration of Implicit Surfaces within Neural Fields
Stephen Hausler, David Hall, Sutharsan Mahendren, Peyman Moghadam
ICRA 2024, 15 Feb 2024
[arXiv]
DeformNet: Latent Space Modeling and Dynamics Prediction for Deformable Object Manipulation
Chenchang Li, Zihao Ai, Tong Wu, Xiaosa Li, Wenbo Ding, Huazhe Xu
ICRA 2024, 12 Feb 2024
[arXiv]
Closing the Visual Sim-to-Real Gap with Object-Composable NeRFs
Nikhil Mishra, Maximilian Sieb, Pieter Abbeel, Xi Chen
ICRA 2024, 7 Mar 2024
[arXiv]
SiLVR: Scalable Lidar-Visual Reconstruction with Neural Radiance Fields for Robotic Inspection
Yifu Tao, Yash Bhalgat, Lanke Frank Tarimo Fu, Matias Mattamala, Nived Chebrolu, Maurice Fallon
ICRA 2024, 11 Mar 2024
[arXiv] [Project] [Video]
MULAN-WC: Multi-Robot Localization Uncertainty-aware Active NeRF with Wireless Coordination
Weiying Wang, Victor Cai, Stephanie Gil
arXiv preprint, 20 Mar 2024
[arXiv]
NVINS: Robust Visual Inertial Navigation Fused with NeRF-augmented Camera Pose Regressor and Uncertainty Quantification
Juyeop Han, Lukas Lao Beyer, Guilherme V. Cavalheiro, Sertac Karaman
arXiv preprint, 1 Apr 2024
[arXiv]
Uncertainty-aware Active Learning of NeRF-based Object Models for Robot Manipulators using Visual and Re-orientation Actions
Saptarshi Dasgupta, Akshat Gupta, Shreshth Tuli, Rohan Paul
arXiv preprint, 2 Apr 2024
[arXiv]
NeRF-Guided Unsupervised Learning of RGB-D Registration
Zhinan Yu, Zheng Qin, Yijie Tang, Yongjun Wang, Renjiao Yi, Chenyang Zhu, Kai Xu
arXiv preprint, 1 May 2024
[arXiv]
Novel View Synthesis with Neural Radiance Fields for Industrial Robot Applications
Markus Hillemann, Robert Langendörfer, Max Heiken, Max Mehltretter, Andreas Schenk, Martin Weinmann, Stefan Hinz, Christian Heipke, Markus Ulrich
arXiv preprint, 7 May 2024
[arXiv]
Neural Visibility Field for Uncertainty-Driven Active Mapping
Shangjie Xue, Jesse Dill, Pranay Mathur, Frank Dellaert, Panagiotis Tsiotra, Danfei Xu
CVPR 2024, 11 Jun 2024
[arXiv] [Project]
dGrasp: NeRF-Informed Implicit Grasp Policies with Supervised Optimization Slopes
Gergely Sóti, Xi Huang, Christian Wurll
arXiv preprint, 14 Jun 2024
[arXiv]
Articulate your NeRF: Unsupervised articulated object modeling via conditional view synthesis
Jianning Deng, Kartic Subr, Hakan Bilen
arXiv preprint, 24 Jun 2024
[arXiv]
GeoNLF: Geometry guided Pose-Free Neural LiDAR Fields
Weiyi Xue, Zehan Zheng, Fan Lu, Haiyun Wei, Guang Chen, Changjun Jiang
arXiv preprint, 8 Jul 2024
[arXiv]
AirNeRF: 3D Reconstruction of Human with Drone and NeRF for Future Communication Systems
Alexey Kotcov, Maria Dronova, Vladislav Cheremnykh, Sausar Karaf, Dzmitry Tsetserukou
arXiv preprint, 15 Jul 2024
[arXiv]
LEIA: Latent View-invariant Embeddings for Implicit 3D Articulation
Archana Swaminathan, Anubhav Gupta, Kamal Gupta, Shishira R. Maiya, Vatsal Agarwal, Abhinav Shrivastava
ECCV 2024, 10 Sep 2024
[arXiv] [Project] [Code]
From Words to Poses: Enhancing Novel Object Pose Estimation with Vision Language Models
Tessa Pulli, Stefan Thalhammer, Simon Schwaiger, Markus Vincze
arXiv preprint, 9 Sep 2024
[arXiv]
NARF24: Estimating Articulated Object Structure for Implicit Rendering
Stanley Lewis, Tom Gao, Odest Chadwicke Jenkins
ICRA 2024, 15 Sep 2024
[arXiv]
Active Neural Mapping at Scale
Zijia Kuang, Zike Yan, Hao Zhao, Guyue Zhou, Hongbin Zha
arXiv preprint, 30 Sep 2024
[arXiv]
Distributed NeRF Learning for Collaborative Multi-Robot Perception
Hongrui Zhao, Boris Ivanovic, Negar Mehr
arXiv preprint, 30 Sep 2024
[arXiv]
Enhancing Exploratory Capability of Visual Navigation Using Uncertainty of Implicit Scene Representation
Yichen Wang, Qiming Liu, Zhe Liu, Hesheng Wang
arXiv preprint, 5 Nov 2024
[arXiv]
NeRF-Aug: Data Augmentation for Robotics with Neural Radiance Fields
Eric Zhu, Mara Levy, Matthew Gwilliam, Abhinav Shrivastava
arXiv preprint, 4 Nov 2024
[arXiv] [Project]
NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields
Xiaoxue Chen, Junchen Liu, Hao Zhao, Guyue Zhou, Ya-Qin Zhang
arXiv preprint, 22 Sep 2023
[arXiv]
Neural Radiance Fields for Transparent Object Using Visual Hull
Heechan Yoon, Seungkyu Lee
arXiv preprint, 13 Dec 2023
[arXiv]
TraM-NeRF: Tracing Mirror and Near-Perfect Specular Reflections through Neural Radiance Fields
Leif Van Holland, Ruben Bliersbach, Jan U. Müller, Patrick Stotko, Reinhard Klein
arXiv preprint, 16 Oct 2023
[arXiv]
SpecNeRF: Gaussian Directional Encoding for Specular Reflections
Li Ma, Vasu Agrawal, Haithem Turki, Changil Kim, Chen Gao, Pedro Sander, Michael Zollhöfer, Christian Richardt
CVPR 2024, 20 Dec 2023
[arXiv] [Project] [Video]
UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections
Fangjinhua Wang, Marie-Julie Rakotosaona, Michael Niemeyer, Richard Szeliski, Marc Pollefeys, Federico Tombari
arXiv preprint, 20 Dec 2023
[arXiv] [Project]
GNeRP: Gaussian-guided Neural Reconstruction of Reflective Objects with Noisy Polarization Priors
Yang Li, Ruizheng Wu, Jiyong Li, Ying-Cong Chen
ICLR 2024, 18 Mar 2024
[arXiv] [Project] [Code]
🔥REFRAME: Reflective Surface Real-Time Rendering for Mobile Devices
Chaojie Ji, Yufeng Li, Yiyi Liao
arXiv preprint, 25 Mar 2024
Abstract
This work tackles the challenging task of achieving real-time novel view synthesis for reflective surfaces across various scenes. Existing real-time rendering methods, especially those based on meshes, often have subpar performance in modeling surfaces with rich view-dependent appearances. Our key idea lies in leveraging meshes for rendering acceleration while incorporating a novel approach to parameterize view-dependent information. We decompose the color into diffuse and specular, and model the specular color in the reflected direction based on a neural environment map. Our experiments demonstrate that our method achieves comparable reconstruction quality for highly reflective surfaces compared to state-of-the-art offline methods, while also efficiently enabling real-time rendering on edge devices such as smartphones.
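The decomposition described in the abstract above (color = diffuse + specular, with the specular term fetched from a neural environment map along the reflected view direction) can be pictured with a short sketch. The snippet below is an illustrative approximation only, not the authors' implementation: the tiny MLP standing in for the environment map, the module names, and the tensor shapes are all assumptions.

```python
# A minimal sketch (assumed names/shapes, not REFRAME's code) of composing a
# per-point color from a diffuse term and a specular term looked up in a
# learned environment map along the mirror-reflected view direction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralEnvMap(nn.Module):
    """Tiny MLP standing in for a neural environment map: direction -> RGB."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, directions):          # (N, 3) unit direction vectors
        return self.mlp(directions)         # (N, 3) RGB in [0, 1]

def shade(diffuse_rgb, normals, view_dirs, env_map):
    """Diffuse color plus a view-dependent specular color fetched from the
    environment map along the reflection of the viewing direction."""
    v = F.normalize(view_dirs, dim=-1)
    n = F.normalize(normals, dim=-1)
    # Mirror the viewing direction about the surface normal.
    reflected = v - 2.0 * (v * n).sum(-1, keepdim=True) * n
    specular_rgb = env_map(reflected)
    return torch.clamp(diffuse_rgb + specular_rgb, 0.0, 1.0)

env = NeuralEnvMap()
pts = 1024
color = shade(torch.rand(pts, 3) * 0.5,   # placeholder diffuse colors
              torch.randn(pts, 3),        # placeholder normals
              torch.randn(pts, 3),        # placeholder view directions
              env)
```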
SAID-NeRF: Segmentation-AIDed NeRF for Depth Completion of Transparent Objects
Avinash Ummadisingu, Jongkeum Choi, Koki Yamane, Shimpei Masuda, Naoki Fukaya, Kuniyuki Takahashi
arXiv preprint, 28 Mar 2024
[arXiv] [Video]
Residual-NeRF: Learning Residual NeRFs for Transparent Object Manipulation
Bardienus P. Duisterhof, Yuemin Mao, Si Heng Teng, Jeffrey Ichnowski
arXiv preprint, 10 May 2024
[arXiv]
NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections
Dor Verbin, Pratul P. Srinivasan, Peter Hedman, Ben Mildenhall, Benjamin Attal, Richard Szeliski, Jonathan T. Barron
arXiv preprint, 23 May 2024
[arXiv] [Project]
Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling
Liwen Wu, Sai Bi, Zexiang Xu, Fujun Luan, Kai Zhang, Iliyan Georgiev, Kalyan Sunkavalli, Ravi Ramamoorthi
CVPR 2024, 23 May 2024
[arXiv] [Project] [Code]
Planar Reflection-Aware Neural Radiance Fields
Chen Gao, Yipeng Wang, Changil Kim, Jia-Bin Huang, Johannes Kopf
arXiv preprint, 7 Nov 2024
[arXiv]
🔥Patch-based 3D Natural Scene Generation from a Single Example
Weiyu Li, Xuelin Chen, Jue Wang, Baoquan Chen
CVPR 2023, 25 Apr 2023
Abstract
We target a 3D generative model for general natural scenes that are typically unique and intricate. Lacking the necessary volumes of training data, along with the difficulties of having ad hoc designs in presence of varying scene characteristics, renders existing setups intractable. Inspired by classical patch-based image models, we advocate for synthesizing 3D scenes at the patch level, given a single example. At the core of this work lies important algorithmic designs w.r.t the scene representation and generative patch nearest-neighbor module, that address unique challenges arising from lifting classical 2D patch-based framework to 3D generation. These design choices, on a collective level, contribute to a robust, effective, and efficient model that can generate high-quality general natural scenes with both realistic geometric structure and visual appearance, in large quantities and varieties, as demonstrated upon a variety of exemplar scenes.
[arXiv] [Project] [Github] [Notes]
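As a rough illustration of the patch nearest-neighbor idea mentioned in the abstract, the sketch below runs exhaustive nearest-neighbor patch matching on a small 3D value grid. It is not the paper's pipeline: the scene representation, the distance measure, and the single-scale refinement loop are simplified assumptions.

```python
# Illustrative 3D patch nearest-neighbor refinement on a toy value grid
# (an assumption-heavy stand-in, not the paper's method or representation).
import numpy as np

def extract_patches(grid, k):
    """Collect all k*k*k patches of a 3D grid as flat vectors."""
    D, H, W = grid.shape
    patches, coords = [], []
    for z in range(D - k + 1):
        for y in range(H - k + 1):
            for x in range(W - k + 1):
                patches.append(grid[z:z+k, y:y+k, x:x+k].ravel())
                coords.append((z, y, x))
    return np.stack(patches), coords

def patch_nn_step(synth, exemplar, k=3):
    """Replace every patch of `synth` with its nearest exemplar patch,
    then average the overlapping votes back into a grid."""
    ex_patches, _ = extract_patches(exemplar, k)
    sy_patches, coords = extract_patches(synth, k)
    # Squared Euclidean distances between all synthesized and exemplar patches.
    d2 = ((sy_patches ** 2).sum(1)[:, None]
          + (ex_patches ** 2).sum(1)[None, :]
          - 2.0 * sy_patches @ ex_patches.T)
    nn_idx = d2.argmin(axis=1)
    out = np.zeros_like(synth)
    weight = np.zeros_like(synth)
    for (z, y, x), j in zip(coords, nn_idx):
        out[z:z+k, y:y+k, x:x+k] += ex_patches[j].reshape(k, k, k)
        weight[z:z+k, y:y+k, x:x+k] += 1.0
    return out / np.maximum(weight, 1.0)

exemplar = np.random.rand(12, 12, 12)   # stand-in for an exemplar scene grid
synth = np.random.rand(12, 12, 12)      # noisy initialization of the new scene
for _ in range(3):                      # a few refinement passes
    synth = patch_nn_step(synth, exemplar, k=3)
```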
ORCa: Glossy Objects as Radiance Field Cameras
Kushagra Tiwary, Akshat Dave, Nikhil Behari, Tzofi Klinghoffer, Ashok Veeraraghavan, Ramesh Raskar
arXiv preprint, 8 Dec 2022
[arXiv] [Projects] [Video]
WaterNeRF: Neural Radiance Fields for Underwater Scenes
Advaith Venkatramanan Sethuraman, Manikandasriram Srinivasan Ramanagopal, Katherine A. Skinner
arXiv preprint, 27 Sep 2022
[arXiv]
Virtual Pets: Animatable Animal Generation in 3D Scenes
Yen-Chi Cheng, Chieh Hubert Lin, Chaoyang Wang, Yash Kant, Sergey Tulyakov, Alexander Schwing, Liangyan Gui, Hsin-Ying Lee
arXiv preprint, 21 Dec 2023
[arXiv] [Project]
A Deep Learning Framework for Wireless Radiation Field Reconstruction and Channel Prediction
Haofan Lu, Christopher Vattheuer, Baharan Mirzasoleiman, Omid Abari
arXiv preprint, 5 Mar 2023
[arXiv]
Neural radiance fields-based holography
Minsung Kang, Fan Wang, Kai Kumano, Tomoyoshi Ito, Tomoyoshi Shimobaba
arXiv preprint, 2 Mar 2023
[arXiv]
Leveraging Neural Radiance Field in Descriptor Synthesis for Keypoints Scene Coordinate Regression
Huy-Hoang Bui, Bach-Thuan Bui, Dinh-Tuan Tran, Joo-Ho Lee
arXiv preprint, 15 Mar 2024
[arXiv]
ThermoNeRF: Multimodal Neural Radiance Fields for Thermal Novel View Synthesis
Mariam Hassan, Florent Forest, Olga Fink, Malcolm Mielle
arXiv preprint, 18 Mar 2024
[arXiv]
Exploring Accurate 3D Phenotyping in Greenhouse through Neural Radiance Fields
Junhong Zhao, Wei Ying, Yaoqiang Pan, Zhenfeng Yi, Chao Chen, Kewei Hu, Hanwen Kang
arXiv preprint, 24 Mar 2024
[arXiv]
Blending Distributed NeRFs with Tri-stage Robust Pose Optimization
Baijun Ye, Caiyun Liu, Xiaoyu Ye, Yuantao Chen, Yuhai Wang, Zike Yan, Yongliang Shi, Hao Zhao, Guyue Zhou
arXiv preprint, 5 May 2024
[arXiv]
R-NeRF: Neural Radiance Fields for Modeling RIS-enabled Wireless Environments
Huiying Yang, Zihan Jin, Chenhao Wu, Rujing Xiong, Robert Caiming Qiu, Zenan Ling
arXiv preprint, 19 May 2024
[arXiv]
Bayesian uncertainty analysis for underwater 3D reconstruction with neural radiance fields
Haojie Lian, Xinhao Li, Yilin Qu, Jing Du, Zhuxuan Meng, Jie Liu, Leilei Chen
arXiv preprint, 11 Jul 2024
[arXiv]
Physics-Informed Learning of Characteristic Trajectories for Smoke Reconstruction
Yiming Wang, Siyu Tang, Mengyu Chu
SIGGRAPH 2024, 12 Jul 2024
[arXiv] [Project] [Video] [Code]
Feasibility of Neural Radiance Fields for Crime Scene Video Reconstruction
Shariq Nadeem Malik, Min Hao Chee, Dayan Mario Anthony Perera, Chern Hong Lim
arXiv preprint, 11 Jul 2024
[arXiv]
PanicleNeRF: low-cost, high-precision in-field phenotyping of rice panicles with smartphone
Xin Yang, Xuqi Lu, Pengyao Xie, Ziyue Guo, Hui Fang, Haowei Fu, Xiaochun Hu, Zhenbiao Sun, Haiyan Cen
arXiv preprint, 4 Aug 2024
[arXiv]
AgriNeRF: Neural Radiance Fields for Agriculture in Challenging Lighting Conditions
Samarth Chopra, Fernando Cladera, Varun Murali, Vijay Kumar
arXiv preprint, 23 Sep 2024
[arXiv]
NeRF-Accelerated Ecological Monitoring in Mixed-Evergreen Redwood Forest
Adam Korycki, Cory Yeaton, Gregory S. Gilbert, Colleen Josephson, Steve McGuire
arXiv preprint, 9 Oct 2024
[arXiv] [Code]
Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video
Hongchi Xia, Zhi-Hao Lin, Wei-Chiu Ma, Shenlong Wang
CVPR 2024, 15 Apr 2024
[arXiv] [Project] [Code]
Towards a Robust Framework for NeRF Evaluation
Adrian Azzarelli, Nantheera Anantrasirichai, David R Bull
arXiv preprint, 29 May 2023
[arXiv]
NeRF View Synthesis: Subjective Quality Assessment and Objective Metrics Evaluation
Pedro Martin, Antonio Rodrigues, Joao Ascenso, Maria Paula Queluz
arXiv preprint, 30 May 2024
[arXiv]
Magic NeRF Lens: Interactive Fusion of Neural Radiance Fields for Virtual Facility Inspection
Ke Li, Susanne Schmidt, Tim Rolff, Reinhard Bacher, Wim Leemans, Frank Steinicke
TVCG, 19 Jul 2023
[arXiv]
CAD-NeRF: Learning NeRFs from Uncalibrated Few-view Images by CAD Model Retrieval
Xin Wen, Xuening Zhu, Renjiao Yi, Zhifeng Wang, Chenyang Zhu, Kai Xu
Frontiers of Computer Science, 5 Nov 2024
[arXiv]
Improving NeRF with Height Data for Utilization of GIS Data
Hinata Aoki, Takao Yamanaka
ICIP 2023, 15 Jul 2023
[arXiv]
Neural Elevation Models for Terrain Mapping and Path Planning
Adam Dai, Shubh Gupta, Grace Gao
arXiv preprint, 24 May 2024
[arXiv]
Sat-NeRF: Learning Multi-View Satellite Photogrammetry With Transient Objects and Shadow Modeling Using RPC Cameras
Roger Marí, Gabriele Facciolo, Thibaud Ehret
CVPR 2022, 16 Mar 2022
[arXiv]
SparseSat-NeRF: Dense Depth Supervised Neural Radiance Fields for Sparse Satellite Images
Lulin Zhang, Ewelina Rupnik
ISPRS Annals 2023, 1 Sep 2023
[arXiv] [Github]
Enabling Neural Radiance Fields (NeRF) for Large-scale Aerial Images -- A Multi-tiling Approach and the Geometry Assessment of NeRF
Ningli Xu, Rongjun Qin, Debao Huang, Fabio Remondino
arXiv preprint, 1 Oct 2023
[arXiv]
Dynamic Occupancy Grids for Object Detection: A Radar-Centric Approach
Max Peter Ronecker, Markus Schratter, Lukas Kuschnig, Daniel Watzenig
ICRA 2024, 2 Feb 2024
[arXiv]
BirdNeRF: Fast Neural Reconstruction of Large-Scale Scenes From Aerial Imagery
Huiqing Zhang, Yifei Xue, Ming Liao, Yizhen Lao
arXiv preprint, 7 Feb 2024
[arXiv]
Aerial Lifting: Neural Urban Semantic and Building Instance Lifting from Aerial Imagery
Yuqi Zhang, Guanying Chen, Jiaxing Chen, Shuguang Cui
CVPR 2024, 18 Mar 2024
[arXiv] [Project] [Code] [Video]
SAT-NGP: Unleashing Neural Graphics Primitives for Fast Relightable Transient-Free 3D reconstruction from Satellite Imagery
Camille Billouard, Dawa Derksen, Emmanuelle Sarrazin, Bruno Vallet
IGARSS 2024, 27 Mar 2024
[arXiv] [Code]
Aerial-NeRF: Adaptive Spatial Partitioning and Sampling for Large-Scale Aerial Rendering
Xiaohan Zhang, Yukui Qiu, Zhenyu Sun, Qi Liu
arXiv preprint, 10 May 2024
[arXiv]
Multiplane Prior Guided Few-Shot Aerial Scene Rendering
Zihan Gao, Licheng Jiao, Lingling Li, Xu Liu, Fang Liu, Puhua Chen, Yuwei Guo
CVPR 2024, 7 Jun 2024
[arXiv]
psPRF: Pansharpening Planar Neural Radiance Field for Generalized 3D Reconstruction Satellite Imagery
Tongtong Zhang, Yuanxiang Li
arXiv preprint, 22 Jun 2024
[arXiv]
Domain Generalization for 6D Pose Estimation Through NeRF-based Image Synthesis
Antoine Legrand, Renaud Detry, Christophe De Vleeschouwer
arXiv preprint, 15 Jul 2024
[arXiv]
BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling
Lulin Zhang, Ewelina Rupnik, Tri Dung Nguyen, Stéphane Jacquemoud, Yann Klinger
arXiv preprint, 18 Sep 2024
[arXiv]
Exploring Seasonal Variability in the Context of Neural Radiance Fields for 3D Reconstruction on Satellite Imagery
Liv Kåreborn, Erica Ingerstad, Amanda Berg, Justus Karlsson, Leif Haglund
arXiv preprint, 5 Nov 2024
[arXiv]
Active Human Pose Estimation via an Autonomous UAV Agent
Jingxi Chen, Botao He, Chahat Deep Singh, Cornelia Fermuller, Yiannis Aloimonos
arXiv preprint, 1 Jul 2024
[arXiv]
Radiance Field Learners As UAV First-Person Viewers
Liqi Yan, Qifan Wang, Junhan Zhao, Qiang Guan, Zheng Tang, Jianhui Zhang, Dongfang Li
ECCV 2024, 10 Aug 2024
[arXiv] [Project]
CopyRNeRF: Protecting the CopyRight of Neural Radiance Fields
Ziyuan Luo, Qing Guo, Ka Chun Cheung, Simon See, Renjie Wan
ICCV 2023, 21 Jul 2023
[arXiv] [Project]
Steganography for Neural Radiance Fields by Backdooring
Weina Dong, Jia Liu, Yan Ke, Lifeng Chen, Wenquan Sun, Xiaozhong Pan
arXiv preprint, 19 Sep 2023
[arXiv]
Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
Andras Horvath, Csaba M. Jozsa
arXiv preprint, 5 Oct 2023
[arXiv]
Noise-NeRF: Hide Information in Neural Radiance Fields using Trainable Noise
Qinglong Huang, Yong Liao, Yanbin Hao, Pengyuan Zhou
arXiv preprint, 2 Jan 2024
[arXiv]
WateRF: Robust Watermarks in Radiance Fields for Protection of Copyrights
Youngdong Jang, Dong In Lee, MinHyuk Jang, Jong Wook Kim, Feng Yang, Sangpil Kim
arXiv preprint, 3 May 2024
[arXiv]
Protecting NeRFs' Copyright via Plug-And-Play Watermarking Base Model
Qi Song, Ziyuan Luo, Ka Chun Cheung, Simon See, Renjie Wan
ECCV 2024, 10 Jul 2024
[arXiv] [Project]
GeometrySticker: Enabling Ownership Claim of Recolorized Neural Radiance Fields
Xiufeng Huang, Ka Chun Cheung, Simon See, Renjie Wan
arXiv preprint, 18 Jul 2024
[arXiv] [Project]
IPA-NeRF: Illusory Poisoning Attack Against Neural Radiance Fields
Wenxiang Jiang, Hanwei Zhang, Shuo Zhao, Zhongwen Guo, Hao Wang
arXiv preprint, 16 Jul 2024
[arXiv] [Code]
S²NeRF: Privacy-preserving Training Framework for NeRF
Bokang Zhang, Yanglin Zhang, Zhikun Zhang, Jinglan Yang, Lingying Huang, Junfeng Wu
arXiv preprint, 3 Sep 2024
[arXiv]
3D Motion Magnification: Visualizing Subtle Motions with Time Varying Radiance Fields
Brandon Y. Feng, Hadi Alzayer, Michael Rubinstein, William T. Freeman, Jia-Bin Huang
ICCV 2023, 7 Aug 2023
[arXiv] [Project]
C-NERF: Representing Scene Changes as Directional Consistency Difference-based NeRF
Rui Huang, Binbin Jiang, Qingyi Zhao, William Wang, Yuxiang Zhang, Qing Guo
arXiv preprint, 5 Dec 2023
[arXiv]
Irregularity Inspection using Neural Radiance Field
Tianqi Ding, Dawei Xiang
arXiv preprint, 21 Aug 2024
[arXiv]
MVImgNet: A Large-scale Dataset of Multi-view Images
Xianggang Yu, Mutian Xu, Yidan Zhang, Haolin Liu, Chongjie Ye, Yushuang Wu, Zizheng Yan, Chenming Zhu, Zhangyang Xiong, Tianyou Liang, Guanying Chen, Shuguang Cui, Xiaoguang Han
CVPR 2023, 10 Mar 2023
[arXiv] [Project] [Github] [Notes]
DiVA-360: The Dynamic Visuo-Audio Dataset for Immersive Neural Fields
Cheng-You Lu, Peisen Zhou, Angela Xing, Chandradeep Pokhariya, Arnab Dey, Ishaan Shah, Rugved Mavidipalli, Dylan Hu, Andrew Comport, Kefan Chen, Srinath Sridhar
arXiv preprint, 31 Jul 2023
[arXiv] [Project]
SingingHead: A Large-scale 4D Dataset for Singing Head Animation
Sijing Wu, Yunhao Li, Weitian Zhang, Jun Jia, Yucheng Zhu, Yichao Yan, Guangtao Zhai
arXiv preprint, 7 Dec 2023
[arXiv] [Project] [Code]
Implicit-Zoo: A Large-Scale Dataset of Neural Implicit Functions for 2D Images and 3D Scenes
Qi Ma, Danda Pani Paudel, Ender Konukoglu, Luc Van Gool
arXiv preprint, 25 Jun 2024
[arXiv]
Neural Kernel Surface Reconstruction
Jiahui Huang, Zan Gojcic, Matan Atzmon, Or Litany, Sanja Fidler, Francis Williams
CVPR 2023, 31 May 2023
[arXiv]
Neuralangelo: High-Fidelity Neural Surface Reconstruction
Zhaoshuo Li, Thomas Müller, Alex Evans, Russell H. Taylor, Mathias Unberath, Ming-Yu Liu, Chen-Hsuan Lin
CVPR 2023, 5 Jun 2023
[arXiv] [Project] [Video]
GridFormer: Point-Grid Transformer for Surface Reconstruction
Shengtao Li, Ge Gao, Yudong Liu, Yu-Shen Liu, Ming Gu
arXiv preprint, 4 Jan 2024
[arXiv] [Code]
PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction
Wanjuan Su, Chen Zhang, Qingshan Xu, Wenbing Tao
arXiv preprint, 23 Jan 2024
[arXiv]
Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
SIGGRAPH 2024, 19 Feb 2024
[arXiv] [Project] [Video]
Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation
Bingxin Ke, Anton Obukhov, Shengyu Huang, Nando Metzger, Rodrigo Caye Daudt, Konrad Schindler
CVPR 2024, 4 Dec 2023
[arXiv] [Project] [Code]
NeRFmentation: NeRF-based Augmentation for Monocular Depth Estimation
Casimir Feldmann, Niall Siegenheim, Nikolas Hars, Lovro Rabuzin, Mert Ertugrul, Luca Wolfart, Marc Pollefeys, Zuria Bauer, Martin R. Oswald
arXiv preprint, 8 Jan 2024
[arXiv]
🔥DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
Wenbo Hu, Xiangjun Gao, Xiaoyu Li, Sijie Zhao, Xiaodong Cun, Yong Zhang, Long Quan, Ying Shan
arXiv preprint, 3 Sep 2024
Abstract
Despite significant advancements in monocular depth estimation for static images, estimating video depth in the open world remains challenging, since open-world videos are extremely diverse in content, motion, camera movement, and length. We present DepthCrafter, an innovative method for generating temporally consistent long depth sequences with intricate details for open-world videos, without requiring any supplementary information such as camera poses or optical flow. DepthCrafter achieves generalization ability to open-world videos by training a video-to-depth model from a pre-trained image-to-video diffusion model, through our meticulously designed three-stage training strategy with the compiled paired video-depth datasets. Our training approach enables the model to generate depth sequences with variable lengths at one time, up to 110 frames, and harvest both precise depth details and rich content diversity from realistic and synthetic datasets. We also propose an inference strategy that processes extremely long videos through segment-wise estimation and seamless stitching. Comprehensive evaluations on multiple datasets reveal that DepthCrafter achieves state-of-the-art performance in open-world video depth estimation under zero-shot settings. Furthermore, DepthCrafter facilitates various downstream applications, including depth-based visual effects and conditional video generation.
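The abstract's segment-wise inference strategy for very long videos can be pictured as: split the video into overlapping windows, run the depth model on each window, then align and cross-fade the overlaps. The code below is an assumption-laden illustration, not DepthCrafter's actual procedure; the window size, overlap, and least-squares scale/shift alignment are placeholders, and `predict_depth` stands in for any per-segment depth model.

```python
# Sketch of overlapping-window depth estimation with stitching
# (hypothetical parameters; not the paper's implementation).
import numpy as np

def stitch_segments(frames, predict_depth, window=110, overlap=25):
    """frames: (T, H, W, 3) video; predict_depth: callable (t, H, W, 3) -> (t, H, W)."""
    T = len(frames)
    depth, start = None, 0
    while start < T:
        end = min(start + window, T)
        seg = predict_depth(frames[start:end])            # depth for this window
        if depth is None:
            depth = seg
        else:
            ov = depth.shape[0] - start                   # frames shared with previous output
            prev, cur = depth[start:], seg[:ov]
            # Least-squares scale/shift so the new segment agrees on the overlap.
            a, b = np.polyfit(cur.ravel(), prev.ravel(), 1)
            seg = a * seg + b
            # Linear cross-fade over the overlapping frames.
            w = np.linspace(0.0, 1.0, ov)[:, None, None]
            depth[start:] = (1 - w) * prev + w * seg[:ov]
            depth = np.concatenate([depth, seg[ov:]], axis=0)
        if end == T:
            break
        start = end - overlap
    return depth

# Usage with a dummy "model" that returns inverse brightness as depth.
dummy = lambda x: 1.0 - x.mean(-1)
video = np.random.rand(300, 32, 32, 3).astype(np.float32)
d = stitch_segments(video, dummy)
assert d.shape == (300, 32, 32)
```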
Diffusion Models in Low-Level Vision: A Survey
Chunming He, Yuqi Shen, Chengyu Fang, Fengyang Xiao, Longxiang Tang, Yulun Zhang, Wangmeng Zuo, Zhenhua Guo, Xiu Li
arXiv preprint, 17 Jun 2024
[arXiv] [Code]
EEG-Driven 3D Object Reconstruction with Color Consistency and Diffusion Prior
Xin Xiang, Wenhui Zhou, Guojun Dai
arXiv preprint, 28 Oct 2024
[arXiv]
GUMBEL-NERF: Representing Unseen Objects as Part-Compositional Neural Radiance Fields
Yusuke Sekikawa, Chingwei Hsu, Satoshi Ikehata, Rei Kawakami, Ikuro Sato
ICIP 2024, 27 Oct 2024
[arXiv]
Thanks to the community! We hope more and more people will join us and submit commits and PRs.
Made with contributors-img.
CC-0