GPU Out of Memory Issue #137
Comments
@MaureenZOU @jwyang In …, but I think my setting is doing something wrong.
Same problem!
Have you solved this problem?
Hi @dongho-Han, I noticed that in your script you used TEST.BATCH_SIZE_TOTAL 8 on 4 GPUs. Can you try changing it to 4?
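If that suggestion fits your setup, only the batch-size overrides in the original command change. A sketch, assuming the same config file, overrides, and checkpoint path as the command in the issue body:

```shell
# Sketch: the original evaluation command, but with the batch-size
# overrides lowered to match the GPU count (4 GPUs -> total batch 4,
# i.e. one image per GPU). All other values are unchanged.
CUDA_VISIBLE_DEVICES=0,1,2,3 mpirun -n 4 python entry.py evaluate \
  --conf_files configs/seem/focalt_unicl_lang_v1.yaml \
  --overrides COCO.INPUT.IMAGE_SIZE 1024 \
  MODEL.DECODER.HIDDEN_DIM 512 \
  MODEL.ENCODER.CONVS_DIM 512 MODEL.ENCODER.MASK_DIM 512 \
  VOC.TEST.BATCH_SIZE_TOTAL 4 TEST.BATCH_SIZE_TOTAL 4 \
  REF.TEST.BATCH_SIZE_TOTAL 4 \
  FP16 True WEIGHT True RESUME_FROM ./pretrained/seem_focalt_v1.pt
```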
Same suggestion. Evaluating multiple images at once on a single GPU will cause: 1. inaccurate evaluation (because of padding); 2. GPU OOM. I usually use 1 GPU for evaluation.
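The padding point can be made concrete: when variable-size images are stacked into one batch, every image is padded up to the largest height and width in the batch, so memory use grows and the evaluator sees padded pixels it has to mask out (or mis-score). A minimal sketch in plain Python; the image sizes are made up for illustration:

```python
# Illustration: batching variable-size images pads everything to the
# largest height/width in the batch. With batch size 1 there is no
# padding at all, which is why single-image evaluation is both cheaper
# and more accurate.
def padded_batch_pixels(sizes):
    """Pixels actually allocated when images of the given (h, w) sizes
    are stacked into one zero-padded batch tensor."""
    max_h = max(h for h, _ in sizes)
    max_w = max(w for _, w in sizes)
    return len(sizes) * max_h * max_w

sizes = [(768, 1024), (512, 640), (1024, 768)]  # hypothetical inputs

real = sum(h * w for h, w in sizes)   # pixels in the raw images
batched = padded_batch_pixels(sizes)  # pixels after padding to the max
print(real, batched, batched - real)  # wasted/padded pixels in the batch
```

Here the padded batch allocates 3 * 1024 * 1024 = 3,145,728 pixels for 1,900,544 real ones, i.e. roughly 65% overhead, before any model activations are counted.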
Hi, I was facing the same issue; adding …
When I try to evaluate with your code, I hit a GPU out-of-memory error. Specifically, running this command:
CUDA_VISIBLE_DEVICES=0,1,2,3 mpirun -n 4 python entry.py evaluate --conf_files configs/seem/focalt_unicl_lang_v1.yaml --overrides COCO.INPUT.IMAGE_SIZE 1024 MODEL.DECODER.HIDDEN_DIM 512 MODEL.ENCODER.CONVS_DIM 512 MODEL.ENCODER.MASK_DIM 512 VOC.TEST.BATCH_SIZE_TOTAL 8 TEST.BATCH_SIZE_TOTAL 8 REF.TEST.BATCH_SIZE_TOTAL 8 FP16 True WEIGHT True RESUME_FROM ./pretrained/seem_focalt_v1.pt
Could you share how much memory is needed for evaluation? I used 4 Titan RTX GPUs with 24576 MiB each.
Error log:
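One way to answer the "how much memory is needed" question empirically (not from this thread; this just uses standard NVIDIA tooling) is to poll GPU memory while the evaluation runs:

```shell
# Poll per-GPU memory usage every second while the evaluation job runs
# in another terminal. --query-gpu and --format are standard
# nvidia-smi options; -l 1 repeats the query every second.
nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv -l 1
```

The peak `memory.used` value observed during a successful single-image run gives a lower bound on what each GPU needs for a larger batch.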