
Solved #4

Open
muses0229 opened this issue Jul 5, 2024 · 3 comments

muses0229 commented Jul 5, 2024

No description provided.

imlixinyang (Owner) commented:

We have tested some cases on a single RTX 3090. Memory usage is very close to the card's maximum, so here is a simple workaround: run the refining separately to avoid OOM. For example:

  1. Generate the camera and 3DGS without refining:
     python inference.py --export_all --text '{text}' --num_refine_steps 0 --num_samples 4
  2. Inspect the results in exps/tmp/videos and choose a sample (filename) for separate refining:
     python refine.py --ply 'exps/tmp/ply/(unknown).ply' --camera 'exps/tmp/camera/(unknown).npy' --export_all --text '{text}' --num_refine_steps 1000

This has been tested on a single T4 GPU (16 GB). Let me know if it works!
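For scripting the second step, the workflow above can be sketched with a small helper that assembles the separate-refining command for a chosen sample. `build_refine_cmd` is a hypothetical helper, not part of this repo; it only assumes the `exps/tmp/ply` and `exps/tmp/camera` layout shown above, where the .ply and .npy files for a sample share the same base name.

```python
import os
import shlex

def build_refine_cmd(sample, text, num_refine_steps=1000, exp_dir="exps/tmp"):
    # Hypothetical helper: given a sample name chosen from exps/tmp/videos,
    # assemble the separate-refining command from step 2 above.
    ply = os.path.join(exp_dir, "ply", sample + ".ply")
    camera = os.path.join(exp_dir, "camera", sample + ".npy")
    return " ".join([
        "python", "refine.py",
        "--ply", shlex.quote(ply),
        "--camera", shlex.quote(camera),
        "--export_all",
        "--text", shlex.quote(text),
        "--num_refine_steps", str(num_refine_steps),
    ])
```

For example, `build_refine_cmd("sample_0", "a cozy cabin")` yields a command pointing at `exps/tmp/ply/sample_0.ply` and `exps/tmp/camera/sample_0.npy`, with the prompt shell-quoted.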

githubnameoo commented:
> [quotes imlixinyang's reply above]

How much GPU memory is actually needed for testing?

imlixinyang (Owner) commented:

There is no fixed number: since the number of Gaussian points varies across scenes during refining, GPU memory usage also varies.
Typically, 28 GB is enough to run refining jointly with generation, and 16 GB is enough to run refining separately.
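As a rough sanity check before launching, these numbers can be turned into a tiny rule of thumb. `pick_refine_mode` is a hypothetical helper; the 28 GB and 16 GB thresholds are the estimates from this comment, not hard limits, since actual usage depends on the number of Gaussian points per scene.

```python
def pick_refine_mode(gpu_mem_gb):
    # Rule of thumb from this thread: ~28 GB to run refining jointly with
    # generation, ~16 GB to run refining separately. These are estimates,
    # not hard limits.
    if gpu_mem_gb >= 28:
        return "joint"
    if gpu_mem_gb >= 16:
        return "separate"
    return "insufficient"
```

For example, on a 24 GB RTX 3090 this suggests running refining separately, matching the workaround above.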

muses0229 changed the title from "OOM during training" to "Solved" on Dec 30, 2024.