Dataset question #15
Hi, it is implemented in this repo. The views are selected so that the interval between each consecutive pair is fixed. For example, for 3 views we select the first frame, the middle frame, and the last frame.
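For context, the fixed-interval selection rule described above can be sketched as follows. This is a minimal illustration, not the repo's actual code; `select_train_views` is a hypothetical helper:

```python
import numpy as np

def select_train_views(num_frames, num_views):
    """Pick `num_views` frame indices with a fixed interval between
    consecutive picks; the first and last frames are always included.
    Hypothetical helper illustrating the rule, not code from this repo."""
    return np.linspace(0, num_frames - 1, num_views, dtype=int).tolist()

# With 25 frames and 3 views: first, middle, and last frame.
print(select_train_views(25, 3))  # -> [0, 12, 24]
```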
Thanks for your reply. I see the corresponding code in dataset_readers.py at line 187: `elif eval and num_images > 0:`
Hi, for an array, assigning a value to its last index does not append; it replaces the existing element.
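In plain Python, the distinction looks like this:

```python
views = ["frame_0", "frame_1", "frame_2"]

views[-1] = "frame_new"   # assignment to the last index REPLACES the element
assert views == ["frame_0", "frame_1", "frame_new"]

views.append("frame_3")   # append, by contrast, GROWS the list
assert views == ["frame_0", "frame_1", "frame_new", "frame_3"]
```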
Amazing. I noticed that other works such as FSGS and InstantSplat choose a specific number of test views (e.g., 12). I have to admire your excellent work again.
Yes, they are tested under the same evaluation protocol.
Thank you very much.
We only use the training frames when constructing the coarse solution.
Hi,
Sincerely looking forward to your reply.
Hi, yes, and actually you don't need to pass all the frames into training; that's only because training and evaluation share the same data loader. If you have only 3 or more views and don't want to evaluate performance, it's also fine to pass only those into training.
Thank you very much.
Hi, 1. No. 2. Yes. 3. I don't quite understand what you mean. The split of training and testing views is enabled by adding the eval flag, which is consistent with the original 3DGS implementation.
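For reference, the eval-flag split in the original 3DGS loader holds out every `llffhold`-th camera (8 by default). The sketch below is a simplified rendition of that logic, not the exact code from `dataset_readers.py`:

```python
def split_train_test(cam_infos, eval_flag, llffhold=8):
    """Simplified sketch of the eval-flag split used by 3DGS-style loaders:
    with eval enabled, every `llffhold`-th camera becomes a test view."""
    if eval_flag:
        train = [c for i, c in enumerate(cam_infos) if i % llffhold != 0]
        test = [c for i, c in enumerate(cam_infos) if i % llffhold == 0]
    else:
        train, test = list(cam_infos), []
    return train, test

# 16 cameras with eval enabled: cameras 0 and 8 are held out for testing.
train, test = split_train_test(list(range(16)), eval_flag=True)
print(test)  # -> [0, 8]
```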
Oh... sorry, I didn't read the code carefully. I got it now.
Notice that, in your so-called 'post-training' stage, only the extrinsics are optimized.
Okay, thank you for your patience. As for registration of the testing views: "The camera pose of the next unregistered testing view is initialized with the corresponding value for the last registered testing view," as described in the paper.
There is no alignment for testing views. Testing views are only for testing, and we only want to know their camera positions.
① For off-the-shelf methods: if we only initialize the testing-view poses through SfM without BA, don't we then need to align the testing-view poses with the training-view poses? ② For not relying on off-the-shelf methods: "post-training optimization of the camera poses of the testing views, based on the RGB loss only." Don't we need to align the testing-view poses with the training-view poses? When we run render.py, don't we need to render the testing views against the point cloud built from the training views? I think the testing poses and the training poses are not in the same coordinate frame. Sincerely looking forward to your reply.
There is no alignment between testing poses and training poses; there is only registration of the testing poses. A training view does not possess its own point cloud.
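The registration discussed here can be sketched as a pose-only optimization: the pose is initialized from the last registered testing view and refined against an L1 RGB loss while the scene stays frozen. This is a hedged sketch, not the repo's implementation; `render_fn` is a hypothetical differentiable renderer, the pose update is a simple additive delta, and a real version would parameterize the update on SE(3):

```python
import torch

def register_test_view(render_fn, gt_image, init_pose, steps=100, lr=0.1):
    """Optimize only the camera pose (here, an additive delta on the pose
    of the last registered testing view) against an L1 RGB loss; the
    Gaussians themselves stay frozen. Illustrative sketch only."""
    delta = torch.zeros_like(init_pose, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(steps):
        loss = (render_fn(init_pose + delta) - gt_image).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (init_pose + delta).detach()

# Toy check with an identity "renderer": the pose converges to the target.
gt = torch.full((4,), 0.5)
pose = register_test_view(lambda p: p, gt, torch.zeros(4))
```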
Sorry,
Thank you. I have thought about these questions very carefully. I hope you can point out what is wrong and resolve my confusion. Sincerely looking forward to your reply.
Be careful with terms. Alignment refers to something specific in this work. I believe your understanding is fine, but your description is not rigorous.
In the original 3DGS, the purpose of running COLMAP on all images is to put the cameras of all images under a unified coordinate system. For few-shot work where the training and testing views are separate, don't we need to put them in a unified coordinate system as well? Reference: NVlabs/InstantSplat#11
Hi, you are correct. The testing views need to be separately registered after training. But you should not call it 'alignment', to avoid confusion, because that term refers to something else in this work.
Thank you for your correction.
Check out `eval.py` and the supplementary material of the paper.
Sorry, I found a discrepancy between the code and the supplementary description:
it includes not only the RGB loss but also a correspondence loss.
Yeah, you can remove the correspondence loss there; it should make little difference. I cleaned up the original messy implementation by rewriting most parts. I think I added it at the time because it helped the metrics in some cases during verification.
If you have further questions, you are welcome to send your contact information to my email (which can be found in the paper), and I will reach you through that.
Thanks a lot. Much appreciated!
Hi, thanks for your excellent work.
May I ask what the specific settings are for selecting sparse views (training views and test views) on the Tanks dataset in the paper, e.g., n=3, n=6, and n=12?