Quality degradation after multi-inferences #54
I would like to understand when I should enable DeepCache and when I should disable it. Is it necessary to disable it after every inference and enable it again before starting a new one? Thanks!
You only need to disable it and reapply it in the next inference if you change the params. I ran into this bug too and solved it this way.
Hi! If I only change the seed or the prompt, do I also need to reapply it? Thanks!
Sorry, I did a test just now: you always need to disable it and enable it again for each inference, even if the params are not changed.
Could you kindly help me understand how to reproduce this issue? Would it be possible for you to share some sample code that reproduces it?
I may only encounter the problem when testing with thousands of images. Thank you very much for your help.
@973398769

```python
helper.disable()
helper.set_params(cache_interval=3, cache_branch_id=0)
helper.enable()
```

This needs to be done before you call pipe().
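The reset described above can be sketched as a loop that runs the three helper calls before every `pipe()` call. The real helper is `DeepCacheSDHelper` from the DeepCache package and needs a loaded diffusion pipeline; `HelperStub` and the lambda `pipe` below are hypothetical stand-ins so the control flow itself is runnable anywhere.

```python
# Workaround pattern: disable, re-apply params, and re-enable the DeepCache
# helper before EVERY inference, even when the params have not changed.
# HelperStub is a hypothetical stand-in for DeepCacheSDHelper that only
# tracks state; it is not part of the DeepCache library.

class HelperStub:
    def __init__(self):
        self.enabled = False
        self.params = None

    def disable(self):
        self.enabled = False

    def set_params(self, cache_interval, cache_branch_id):
        self.params = (cache_interval, cache_branch_id)

    def enable(self):
        self.enabled = True


def run_batch(helper, pipe, prompts):
    """Generate one image per prompt, resetting the helper each iteration."""
    images = []
    for prompt in prompts:
        # Reset must happen before calling pipe(), per the workaround above.
        helper.disable()
        helper.set_params(cache_interval=3, cache_branch_id=0)
        helper.enable()
        images.append(pipe(prompt))
    return images


helper = HelperStub()
# A stand-in pipe; in real use this would be a diffusers pipeline object.
images = run_batch(helper, pipe=lambda p: f"image for {p!r}",
                   prompts=["a cat", "a dog"])
```

The point of the pattern is that the reset lives inside the loop body, so a long batch of generations (the "thousands of images" case discussed below) never reuses stale cache state.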
I would like to reproduce the quality-degradation issue, but locally I only see it when testing with thousands of images. Could you please tell me how to make it appear more quickly? Thank you.
This is my test pipeline:
I'm not sure we're talking about the same problem. I'm working with SDXL, and I reuse the pipeline without deleting the variable or restarting the script, because I'm running inside a Gradio interface. I don't know how you're doing it: whether you run this code inside a loop and reuse the same pipeline variable, or destroy the variable each time. For me, the first inference generates the image normally; if I then try to generate again without disabling first, the image already comes out degraded. Like this:
Are you using generate_pipe.enable_model_cpu_offload() in your inference? Enabling it can cause this issue, as detailed here: https://www.kaggle.com/code/ledrose/deepcache-cpu-offload-bug
We seem to be discussing the same issue. Although I haven't used enable_model_cpu_offload(), I still see a drop in quality when inferring a large number of images, though it's hard to reproduce consistently.
Yes, I use CPU offload, so we need to disable and re-enable it inside the loop.
How are you generating images: are you using a loop, or setting the number of images in the pipeline?
This is my current setup: helper.set_params(cache_interval=3, cache_branch_id=0). When processing a large number of images, I've noticed some quality degradation, but I'm not sure whether it's caused by DeepCache.