Temporal Fusion Error with Kitti Format Dataset #372

Open
malaz24 opened this issue Dec 7, 2024 · 0 comments

Comments


malaz24 commented Dec 7, 2024

Describe the bug
I am using BEVDepth4D with a KITTI-format dataset, but I cannot run it with more than one adjacent frame. When I use multiple adjacent frames, I get this error:
RuntimeError: stack expects each tensor to be equal size, but got [25, 3, 480, 752] at entry 0 and [10, 3, 480, 752] at entry 9
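The shapes suggest that one sample in the batch carries 25 stacked images (adjacent frames × cameras) while another carries only 10, so mmcv's collate (which falls back to torch's default_collate for tensors) cannot stack them. For reference, here is a minimal standalone sketch, not the BEVDet code, that reproduces the same failure with a toy dataset (spatial size shrunk to keep it cheap; the real tensors are [N, 3, 480, 752]):

```python
import torch
from torch.utils.data import DataLoader, Dataset


class ToyMultiFrameDataset(Dataset):
    """Toy dataset: one sample returns fewer stacked images than the rest,
    mimicking a sample whose adjacent frames are missing."""

    def __len__(self):
        return 16

    def __getitem__(self, idx):
        num_imgs = 10 if idx == 9 else 25  # entry 9 has fewer frames
        return torch.zeros(num_imgs, 3, 8, 12)


loader = DataLoader(ToyMultiFrameDataset(), batch_size=16)
try:
    next(iter(loader))
except RuntimeError as err:
    # "stack expects each tensor to be equal size, but got [25, 3, 8, 12]
    #  at entry 0 and [10, 3, 8, 12] at entry 9"
    print(err)
```

So the failure happens while batching in the DataLoader worker, before the model is even called.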

Reproduction

  1. What command or script did you run?
A placeholder for the command.
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  3. What dataset did you use?

Environment

  1. Please run python mmdet3d/utils/collect_env.py to collect necessary environment information and paste it here.
  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback
If applicable, paste the error traceback here.
2024-12-07 11:57:19,418 - mmdet - INFO - Epoch [1][50/1357] lr: 2.458e-05, eta: 12:38:39, time: 3.367, data_time: 0.543, memory: 31377, loss_depth: 17.7308, task0.loss_xy: 1.5562, task0.loss_z: 0.9970, task0.loss_whl: 3.4447, task0.loss_yaw: 2.3230, task0.loss_vel: 1.3917, task0.loss_heatmap: 257.0275, loss: 284.4708, grad_norm: 4975.9456
Traceback (most recent call last):
File "tools/train.py", line 281, in
main()
File "tools/train.py", line 270, in main
train_model(
File "/users/user/person3d/3D_person_detection/ BEVDet-dev3.0/mmdet3d/apis/train.py", line 344, in train_model
train_detector(
File "/users/user/person3d/3D_person_detection/ BEVDet-dev3.0/mmdet3d/apis/train.py", line 319, in train_detector
runner.run(data_loaders, cfg.workflow)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 136, in run
epoch_runner(data_loaders[i], **kwargs)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 49, in train
for i, data_batch in enumerate(self.data_loader):
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in next
data = self._next_data()
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
RuntimeError: Caught RuntimeError in DataLoader worker process 1.
Original Traceback (most recent call last):
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/parallel/collate.py", line 79, in collate
return {
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/parallel/collate.py", line 80, in
key: collate([d[key] for d in batch], samples_per_gpu)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/parallel/collate.py", line 77, in collate
return [collate(samples, samples_per_gpu) for samples in transposed]
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/parallel/collate.py", line 77, in
return [collate(samples, samples_per_gpu) for samples in transposed]
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/mmcv/parallel/collate.py", line 84, in collate
return default_collate(batch)
File "/users/user/miniconda3/envs/bevdet/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 63, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [25, 3, 480, 752] at entry 0 and [10, 3, 480, 752] at entry 9


Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
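I have not fully pinned down the cause, but my guess is that for some KITTI samples the pipeline cannot find all requested adjacent sweeps, so fewer images end up stacked for those samples. Below is only a rough sketch of one possible guard; the transform name, the 'img_inputs' key layout, and the padding strategy are my assumptions, not the actual BEVDet API, and the per-frame metadata (intrinsics, sensor2ego transforms, etc.) would need the same padding for this to be a real fix:

```python
import torch


class PadAdjacentFrames:
    """Hypothetical pipeline step: repeat the key-frame images when adjacent
    frames are missing, so every sample stacks to [expected_imgs, 3, H, W]."""

    def __init__(self, expected_imgs, num_cams):
        self.expected_imgs = expected_imgs  # (1 + num_adjacent) * num_cams
        self.num_cams = num_cams

    def __call__(self, results):
        imgs = results['img_inputs'][0]  # assumed: (N, 3, H, W) image tensor
        missing = self.expected_imgs - imgs.shape[0]
        if missing > 0:
            key_frame = imgs[:self.num_cams]        # current-frame images
            repeats = -(-missing // self.num_cams)  # ceil(missing / num_cams)
            pad = key_frame.repeat(repeats, 1, 1, 1)[:missing]
            imgs = torch.cat([imgs, pad], dim=0)
            results['img_inputs'] = (imgs, *results['img_inputs'][1:])
        return results
```

If the maintainers can confirm where the adjacent-frame count is decided for KITTI-format infos, I am happy to open a PR.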
