Thank you for providing a way to fine-tune LTX-Video. Training seems to work well, but when I change the batch size (in this case to 4), I encounter the exception below. I'm using the suggested example script from the repo, running on diffusers installed from GitHub, updated two days ago.
This is the exception:
12/21/2024 12:38:27 - ERROR - finetrainers - Traceback (most recent call last):
File "/some_path/train.py", line 34, in main
trainer.train()
File "/some_path/trainer.py", line 424, in train
noisy_latents = (1.0 - sigmas) * latent_conditions["latents"] + sigmas * noise
RuntimeError: The size of tensor a (4) must match the size of tensor b (128) at non-singleton dimension 2
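The error pattern suggests a broadcasting mismatch: a per-sample `sigmas` tensor of shape `(batch,)` being multiplied against latents that have more dimensions, so the trailing dimensions fail to line up once `batch > 1`. A minimal sketch of the failure and the usual fix (adding trailing singleton dims to `sigmas`), using NumPy, which follows the same broadcasting rules as PyTorch. The latent shape `(batch, num_tokens, 128)` is a hypothetical stand-in chosen to reproduce the "(4) vs (128)" message, not the actual shape used by the trainer:

```python
import numpy as np

batch = 4
# Hypothetical packed-latent shape: (batch, num_tokens, channels)
latents = np.random.randn(batch, 256, 128)
noise = np.random.randn(*latents.shape)
sigmas = np.random.rand(batch)  # one sigma per sample, shape (4,)

# Direct multiplication fails: broadcasting aligns trailing dims,
# so (4,) is matched against the channel dim (128) and errors out.
try:
    noisy = (1.0 - sigmas) * latents + sigmas * noise
except ValueError as e:
    print("broadcast error:", e)

# Fix: give sigmas trailing singleton dims so each sample's sigma
# broadcasts over that sample's latents.
sigmas_b = sigmas.reshape(batch, 1, 1)
noisy = (1.0 - sigmas_b) * latents + sigmas_b * noise
print(noisy.shape)  # (4, 256, 128)
```

With batch size 1 the mismatch happens to broadcast silently, which would explain why the example script works until batching is enabled.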
Also hijacking my own issue:
I tried adding --resume_from_checkpoint, since that worked with the old cogvideox-factory trainer, but it doesn't seem to do anything now. Is that expected?
Thanks!
Information
The official example scripts
My own modified scripts
Reproduction
Change --batch_size 1 to something other than 1
Expected behavior
Training continues with batching enabled.