Can the base model finetune not be run as distributed fine-tuning? I set gpu_num=2, but each GPU still loads the model separately. What is the reason? #592
-
Answered by zRzRzRzRzRzRzR on Dec 14, 2023
-
A single 16 GB GPU is not enough, so I want to use two cards, but no matter what I try it doesn't work: each GPU loads the model separately and trains on its own.
1 reply
No, it can't be done.
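For anyone hitting the same 16 GB limit: since the discussion above says the repo's base-model fine-tuning script does not support distributed training, a common workaround is to shard the checkpoint across both cards and train only small LoRA adapters. The sketch below is not this repo's script; it assumes a Hugging Face-compatible causal LM checkpoint, and `MODEL_PATH` and the `target_modules` name are placeholders you would need to adapt.

```python
# Hypothetical workaround sketch (not the repo's finetune script): shard the base
# model across two 16 GB GPUs with device_map="auto" (requires `accelerate`),
# then fine-tune only LoRA adapters so the optimizer state stays small.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

MODEL_PATH = "path/to/base-model"  # placeholder: local checkpoint or hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH, trust_remote_code=True)

# device_map="auto" splits the layers across all visible GPUs (model parallelism),
# instead of replicating the full model on each card.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# LoRA keeps the number of trainable parameters (and optimizer memory) small.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["query_key_value"],  # assumption: GLM-style attention module name
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

With the model sharded this way, a single training process drives both GPUs, so each card holds only part of the weights rather than a full copy, which is the behavior the question was asking for.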