[bug]: Image generation fails with a server error when starting InvokeAI offline. #6623
Comments
With single-file (checkpoint) loading, diffusers still needs access to the models' configuration files. Previously, when we converted models, we used a local copy of these config files. With single-file loading we no longer reference the local config files, so diffusers downloads them. The latest diffusers release revises the single-file loading logic. I think we'll need to upgrade to the latest version, then review the new API to see what our options are.
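The failure described above can be pictured as a config-resolution fallback: if the legacy config YAML isn't found locally, the loader reaches out to the Hugging Face Hub, which fails offline. A minimal sketch of that logic, with a hypothetical helper name and filenames (this is an illustration, not InvokeAI's actual code):

```python
from pathlib import Path

def resolve_model_config(
    checkpoint: Path,
    legacy_config_dir: Path,
    config_name: str = "v1-inference.yaml",
):
    """Return ("local", path) if the config exists on disk,
    otherwise ("download", name) to indicate a Hub fetch is needed.

    Hypothetical helper illustrating why single-file loading
    breaks without internet access: when no local config is
    found, the only remaining option is a network download.
    """
    local = legacy_config_dir / config_name
    if local.exists():
        return ("local", local)
    # No local copy: a network fetch would be attempted here,
    # which is exactly what fails when the machine is offline.
    return ("download", config_name)
```

The fix being discussed amounts to making the "local" branch win whenever the config is already on disk, so the "download" branch is never reached for known model types.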
This makes it completely ONLINE ONLY!
@lstein Forgot to tag you - I think we should be able to fix this up pretty easily.
Still the same error! InvokeAI demands an internet connection to download config files that are already local, every time you change the model! Setting `legacy_config_dir` in `invokeai.yaml` doesn't help; it still demands internet. This bug should be retitled 'redundant yaml downloads, internet required'.
@jameswan That's an entirely different problem. Please create your own issue. Note that Invoke v2 is ancient.
@someaccount1234 Yes, this is still a problem. We will close this issue when it is resolved.
If a more complete stack trace is needed, I provided it in my (duplicated) issue:
Thanks @TobiasReich, I saw that. This isn't a mysterious issue; the cause is very clear. I experimented the other day with providing the config files we already have on disk, but it doesn't matter anyway, because
Now, in the infinite canvas, every time a new image is uploaded, an internet connection is required. Maybe it's time to go back to the old version. |
@MOzhi327 No, that's not how it works. There's no internet connection needed when using canvas. What makes you think an internet connection is required? |
@psychedelicious Thank you for the reply. On my side, if the VPN is turned off, there is no way to load the model, as follows. |
@MOzhi327 Ok, thanks for clarifying. Yes, we know about the internet connectivity issue and will fix it. |
@psychedelicious Thank you very much |
The problem was introduced when we implemented single-file loading in v4.2.6 on 15 July 2024. We have a few large projects that are taking all contributors' time and which are both resource and technical blockers to resolving this issue. In the meantime, you do not need to use single-file loading at all: you can convert your checkpoint/safetensors models to diffusers before going offline (there's a button in the model manager), after which no internet connection is needed to generate.
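Separately, if the required config files have already been fetched once and live in the Hugging Face cache, the hub client can be told to stay offline via the documented `HF_HUB_OFFLINE` environment variable. This is a general huggingface_hub feature rather than an InvokeAI fix, and it only helps when the files are already cached; a sketch:

```python
import os

# HF_HUB_OFFLINE=1 tells huggingface_hub to serve files from the
# local cache and fail fast instead of attempting network requests.
# Set it in the environment before launching InvokeAI; it only
# works if the needed config files were downloaded at least once.
os.environ["HF_HUB_OFFLINE"] = "1"

def hub_offline() -> bool:
    """Report whether offline mode has been requested via the env var."""
    return os.environ.get("HF_HUB_OFFLINE") == "1"
```

This does not resolve the underlying bug (configs that already ship with InvokeAI should never need a Hub lookup), but it can make cached setups usable without a connection.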
@psychedelicious Thank you! That is useful, but for models kept in an external directory, converting costs additional disk space. For someone like me who mainly uses one or two models it is workable, but it may not be a practical approach for others who need to use many models.
THANK YOU. |
Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
GTX 1660
GPU VRAM
6GB
Version number
4.2.6
Browser
Firefox
Python dependencies
No response
What happened
I am using SD 1.5 models in safetensor format without converting them to diffusers.
Image generation fails with a server error when starting InvokeAI offline.
Image generation starts only when connected to the internet for the first generation; however, subsequent generations work offline until I switch models.
The error reappears when switching models while offline.
What you expected to happen
InvokeAI should work offline.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response