Enable multi-device support for DPT #31066
Conversation
cc @SunMarc
Hi @OmarManzoor, not sure what is happening. However, to make it work on multiple devices, you usually need to put some modules in `_no_split_modules`.
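For context, in transformers this usually means listing the blocks that must not be split across devices as a class attribute on the pretrained model. A minimal sketch, assuming illustrative module names rather than DPT's actual list:

```python
# Minimal sketch, not DPT's actual definition: _no_split_modules tells
# accelerate which blocks must stay whole on a single device when it
# builds a device_map. The module names below are assumptions.
from transformers import DPTConfig, PreTrainedModel


class DPTPreTrainedModel(PreTrainedModel):
    config_class = DPTConfig
    base_model_prefix = "dpt"
    _no_split_modules = ["DPTViTEmbeddings", "DPTViTLayer"]
```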
Hi @SunMarc,
The plan is to switch to …
Then maybe I should mark this particular test as skipped for DPT?
Sure, but right now it doesn't work with multi-GPU. When I changed to …
So I tried adding:

```python
def test_model_parallelism(self):
    config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common()

    for model_class in self.all_model_classes:
        if model_class._no_split_modules is None:
            continue

        inputs_dict_class = self._prepare_for_class(inputs_dict, model_class)
        model = model_class(config).eval()
        model = model.to(torch_device)
        torch.manual_seed(0)
        base_output = model(**inputs_dict_class)

        model_size = compute_module_sizes(model)[""]
        # We test several splits of sizes to make sure it works.
        max_gpu_sizes = [int(p * model_size) for p in self.model_split_percents[1:]]
        with tempfile.TemporaryDirectory() as tmp_dir:
            model.cpu().save_pretrained(tmp_dir)

            for max_size in max_gpu_sizes:
                max_memory = {0: max_size, 1: model_size * 2, "cpu": model_size * 2}
                new_model = model_class.from_pretrained(tmp_dir, device_map="sequential", max_memory=max_memory)
                # Making sure part of the model will actually end up offloaded
                self.assertSetEqual(set(new_model.hf_device_map.values()), {0, 1})
```

and it fails with:

```
>               self.assertSetEqual(set(new_model.hf_device_map.values()), {0, 1})
E               AssertionError: Items in the second set but not the first:
E               0

tests/test_modeling_common.py:3161: AssertionError
```
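The assertion means the computed device map never placed anything on GPU 0, so the model was not actually split across the two devices. The split logic the test exercises can be reproduced outside the test suite with accelerate's public utilities; a rough sketch follows, where the checkpoint id is an example rather than something taken from this thread:

```python
# Rough sketch of the device-map split the failing test exercises, using
# accelerate's public utilities. The checkpoint id is an assumption.
from accelerate.utils import compute_module_sizes, infer_auto_device_map
from transformers import DPTForDepthEstimation

model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")
model_size = compute_module_sizes(model)[""]  # total parameter size in bytes

# Cap GPU 0 at half the model so some modules must spill over to GPU 1.
max_memory = {0: model_size // 2, 1: model_size * 2, "cpu": model_size * 2}
device_map = infer_auto_device_map(
    model,
    max_memory=max_memory,
    no_split_module_classes=model._no_split_modules,
)
print(set(device_map.values()))  # the test expects {0, 1}
```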
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
What does this PR do?
Adds multi-device support for DPT. I tested this on a Kaggle notebook with two T4 GPUs.
Towards #29786
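For reviewers, a quick smoke test of the multi-device path on a two-GPU machine might look roughly like the sketch below; the checkpoint id and input shape are assumptions, not necessarily what was run on Kaggle:

```python
# Hedged multi-GPU smoke test; checkpoint id and input shape are assumptions.
import torch
from transformers import DPTForDepthEstimation

model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large", device_map="auto")
# With two visible GPUs, the weights should be spread over both devices.
assert len(set(model.hf_device_map.values())) > 1

pixel_values = torch.randn(1, 3, 384, 384)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)
print(outputs.predicted_depth.shape)
```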
Before submitting
- [ ] Did you read the contributor guideline, Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@amyeroberts @ArthurZucker