
Hybrid Localization with Points and Lines #68

Open
MINJEEgi opened this issue Mar 11, 2024 · 2 comments

@MINJEEgi

Hello, I would like to run my own data through your LIMAP. I'm interested in changing the input of the 'Hybrid Localization with Points and Lines' code to use our data instead of the 7Scenes dataset. Could you let me know what is needed for that? It would be really appreciated!

@MarkYu98
Collaborator

Sorry for the late reply. The main function for the hybrid point-line localization is limap.runners.line_localization (as in final_poses = _runners.line_localization(...) in the localization runner). To use your own data you'll have to prepare the corresponding configs, ImageCollection, and point and line correspondences, etc. You can reference the script https://github.com/cvg/limap/blob/main/runners/7scenes/localization.py to see how these data should be provided to the function.
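
For reference, a minimal sketch of how your own data might be wired into that call. The two helper functions are placeholders for dataset-specific code (not limap API), and the remaining arguments of line_localization are deliberately left open; runners/7scenes/localization.py stays the authoritative reference for the exact inputs.

```python
import limap.runners as _runners

def load_my_config():
    """Placeholder (not limap API): return the localization config, e.g.
    loaded from one of the YAML files under cfgs/localization/ in the repo
    and adjusted to your own paths and options."""
    ...

def build_my_image_collection():
    """Placeholder (not limap API): return a limap.base.ImageCollection with
    the cameras, poses and image paths of your database images, mirroring
    how runners/7scenes/localization.py builds it for 7Scenes."""
    ...

def localize_my_dataset():
    cfg = load_my_config()
    imagecols = build_my_image_collection()
    # Query images, 2D-3D point correspondences and retrieval neighbors must
    # also be prepared as in runners/7scenes/localization.py; the remaining
    # arguments are omitted here because the exact signature should be taken
    # from limap/runners/line_localization.py.
    final_poses = _runners.line_localization(
        cfg,
        imagecols,
        ...,  # remaining inputs: see the 7Scenes runner script
    )
    return final_poses
```

The idea is that everything dataset-specific (intrinsics, poses, image paths, correspondences) is built outside the runner, which itself is dataset-agnostic.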

@MINJEEgi
Author

MINJEEgi commented Aug 23, 2024

Thank you for your reply. I have an additional question regarding the depth map usage mentioned in the paper. The paper discusses using depth maps like those from RGB-D images. Can I choose between using depth images directly or generating depth maps through structure from motion (SfM) methods like COLMAP? Is there any code available that uses only depth images and RGB image data, without relying on COLMAP for depth map generation?

In summary, I want to do Section E, "Line Reconstruction given Depth Maps", from the paper.

The README mentions a "Using auxiliary depth maps" section, which is customized for the Hypersim dataset. In the "Localization with points & lines" section, there's an option to use --use_dense_depth, and it also requires running hloc. Is hloc only needed for evaluation? When I run this code with --use_dense_depth, am I correctly using only the depth images? Or should I use fitnmerge.py with the ETH3D format or ScanNet format?
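
For the depth-map route, a rough sketch is below, under the assumption that limap.runners exposes a fit-and-merge runner (as used by the dataset-specific fitnmerge.py scripts, e.g. for Hypersim/ETH3D/ScanNet) that takes an ImageCollection plus per-image depth data. The function name line_fitnmerge, its signature, and the helper below are assumptions to verify against limap/runners/ and those scripts.

```python
import limap.runners as _runners

def load_my_depth(img_id):
    """Placeholder (not limap API): return the depth map (or depth-reader
    object) for image img_id, in whatever form the fit-and-merge runner and
    the dataset fitnmerge.py scripts expect."""
    ...

def reconstruct_from_depth(cfg, imagecols):
    """Sketch: line reconstruction from RGB images plus your own depth maps,
    without COLMAP/MVS depth. cfg and imagecols are prepared analogously to
    the ETH3D/ScanNet fitnmerge runner scripts."""
    depths = [load_my_depth(img_id) for img_id in imagecols.get_img_ids()]
    # Assumed entry point; verify the actual name and signature in limap/runners/.
    return _runners.line_fitnmerge(cfg, imagecols, depths)
```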
