
Help! RuntimeError: a leaf Variable that requires grad is being used in an in-place operation. #203

Open
doqizuo opened this issue Nov 29, 2024 · 4 comments

@doqizuo commented Nov 29, 2024

```
creating data loader...
creating model and diffusion...
training...
Traceback (most recent call last):
  File "scripts/segmentation_train.py", line 118, in <module>
    main()
  File "scripts/segmentation_train.py", line 70, in main
    TrainLoop(
  File "D:\MedSegDiff-master.\guided_diffusion\train_util.py", line 83, in __init__
    self._load_and_sync_parameters()
  File "D:\MedSegDiff-master.\guided_diffusion\train_util.py", line 139, in _load_and_sync_parameters
    dist_util.sync_params(self.model.parameters())
  File "D:\Anaconda\envs\sg\lib\site-packages\torch\distributed\distributed_c10d.py", line 1195, in broadcast
    work.wait()
RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
```

Please tell me where the problem lies, and why? Hope someone can help me ~ Thanks!!!
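
(Context: the failing call is `dist.broadcast(p, 0)`, which writes the received data into `p` in place, and PyTorch's autograd rejects in-place writes to a leaf tensor that requires grad; model parameters are exactly such leaves. A minimal sketch in plain PyTorch, with no distributed setup, that trips the same check:)

```python
import torch

# A model parameter is a leaf tensor with requires_grad=True.
p = torch.zeros(3, requires_grad=True)

try:
    # Stand-in for the in-place write that dist.broadcast performs.
    p.add_(1.0)
except RuntimeError as e:
    print(e)
# -> a leaf Variable that requires grad is being used in an in-place operation.
```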

@Eins152 commented Dec 4, 2024

Try adding `p = p + 0` in the `sync_params` function within `dist_util.py`, as follows:
```python
import torch as th
import torch.distributed as dist

def sync_params(params):
    """
    Synchronize a sequence of tensors across ranks from rank 0.
    """
    for p in params:
        with th.no_grad():
            p = p + 0  # fresh non-leaf tensor that does not require grad
            dist.broadcast(p, 0)
```
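
Why this works: inside `th.no_grad()`, `p + 0` produces a fresh tensor that is neither a leaf of the autograd graph nor one that requires grad, so the in-place write performed by `dist.broadcast` passes autograd's check. One caveat worth flagging: the broadcast result then lands in that copy, not in the original parameter, so on a genuine multi-rank run the weights on non-zero ranks would not actually be overwritten; for a single-process run (the usual case on Windows) this is harmless. Below is a sketch, not the repository's code, of a variant that also copies the result back into the parameter:

```python
import torch as th
import torch.distributed as dist

def sync_params(params):
    """
    Synchronize a sequence of tensors across ranks from rank 0.

    Sketch only: broadcasts into a detached copy (safe for the
    in-place write) and then copies the result back into the real
    parameter so non-zero ranks actually receive rank 0's weights.
    """
    for p in params:
        with th.no_grad():
            buf = p.detach().clone()  # copy that does not require grad
            dist.broadcast(buf, 0)    # in-place write into the copy is allowed
            p.copy_(buf)              # write back; in-place ops are OK under no_grad
```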

@Eins152 commented Dec 4, 2024

#84


@doqizuo (Author) commented Dec 6, 2024

> Try adding `p = p + 0` in the `sync_params` function within `dist_util.py`, as follows: …

THANKSSSSSS!!
