Solved #4
We have tested some cases on a single RTX 3090. The memory cost is very close to the maximum, so we provide a simple workaround: run the refinement as a separate step to avoid OOM. For example:

```shell
python inference.py --export_all --text '{text}' --num_refine_steps 0 --num_samples 4
python refine.py --ply 'exps/tmp/ply/(unknown).ply' --camera 'exps/tmp/camera/(unknown).npy' --export_all --text '{text}' --num_refine_steps 1000
```

This has been tested on a single T4 GPU (16 GB). Let me know if it works!
How much GPU memory is actually needed to run the test?
There is no fixed number. The number of Gaussian points varies from scene to scene during refinement, so the GPU memory cost varies as well.
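Since the memory cost varies per scene, it can help to check how much GPU memory is free before launching the refinement step. A minimal stdlib-only sketch (not part of this repo) that queries `nvidia-smi` and degrades gracefully when no NVIDIA driver is present:

```python
import subprocess

def gpu_memory_free_mib():
    """Return a list of free memory (MiB) per GPU, or None if nvidia-smi
    is unavailable. The --query-gpu/--format flags are standard nvidia-smi
    options; everything else here is illustrative, not from the repo."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.free",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return None  # no NVIDIA driver/tooling on this machine
    return [int(line) for line in out.split() if line.strip()]
```

You could call this before `refine.py` and skip or batch-reduce the run when free memory looks too low for the scene.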