Replies: 1 comment
-
On a lark I decided to downgrade my Nvidia driver to 531. I ran the same simple 768x768 generation again, upscaling to twice the size, and got this error. So yes, somehow I managed to bork CUDA, at least where SDNext is concerned. My A1111 instance performed fine (even slightly faster) with the older driver.
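For anyone following along, this is the sanity check I'd run inside the SDNext venv after a driver change, to confirm what the installed torch build actually sees (a minimal sketch; the venv path in the comment is an assumption about a default Windows install):

```python
# Quick check of what the torch build inside the SDNext venv actually sees.
# Run this with the venv's own interpreter, e.g. venv\Scripts\python.exe on Windows.
import torch

print("torch version:  ", torch.__version__)
print("built for CUDA: ", torch.version.cuda)         # CUDA runtime the wheel was compiled against
print("CUDA available: ", torch.cuda.is_available())  # False here means the driver/runtime mismatch is real
if torch.cuda.is_available():
    print("device:         ", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
```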
-
I was poking around with pip and trying to upgrade some libraries related to CUDA. It didn't work, so I attempted to back out my changes. Everything appeared to work fine again, but then I ran into a brick wall.
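In hindsight, dumping the exact package versions before and after the pip experiment would have made backing out much easier. A minimal sketch of what I'd check now, run with the venv's Python (the package names are just the usual suspects in an SD install, not anything SDNext-specific):

```python
# List the CUDA-adjacent packages in the active venv so a botched pip upgrade is easy to spot.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "torchvision", "xformers", "triton", "bitsandbytes"):
    try:
        print(f"{pkg:<14} {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg:<14} (not installed)")
```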
Every time I try to use hires-fix to upscale anything larger than 512x512, the process drastically chews up GPU memory and slows down. In the screenshot below, the first run was a single 512x512 image with the second-pass Lanczos upscaler set to double the image size to 1024x1024. The second run is exactly the same, but with the initial image size set to 640x640. All my added plugins are turned off in both cases.
It should be noted that while the second image says only 4.69 GB out of 24 GB of memory was used, my Windows Task Manager showed the GPU running at its full 24 GB capacity during the entire 14 minutes it took to complete the image.
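For anyone who wants to see the same discrepancy, this is roughly how I'd log it from a Python console in the same venv; the "allocated" figure is typically what a UI reports, while Task Manager sees everything the process has reserved on the card (a sketch, not SDNext's own reporting code):

```python
# Compare what PyTorch has allocated vs reserved vs what the driver says is left on the card.
import torch

def vram_snapshot(tag=""):
    free, total = torch.cuda.mem_get_info()  # bytes, as reported by the driver
    gib = 1024 ** 3
    print(f"[{tag}] allocated={torch.cuda.memory_allocated()/gib:.2f} GiB  "
          f"reserved={torch.cuda.memory_reserved()/gib:.2f} GiB  "
          f"driver free={free/gib:.2f}/{total/gib:.2f} GiB")

vram_snapshot("before hires pass")
# ... run the generation ...
vram_snapshot("after hires pass")
```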
Here's what I've tried so far that hasn't worked.
The last one really gobsmacked me, because a fresh install with no connection to the other instance exhibited the same behavior. So to that end I tried:
None of which helped.
I also have A1111 installed on this machine, so I tried the exact same parameters there, and it does not hit the upscaling slowdown; the 640x640 upscale only took a few seconds. So somehow this is unique to running SDNext.
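If it helps narrow things down, a bare diffusers run at the same sizes should show whether the underlying torch/CUDA stack is healthy independent of either UI (a sketch; the model id, prompt, and step count are placeholders, not what either UI actually uses):

```python
# Bare-bones diffusers run at the two base sizes, to take both UIs out of the equation.
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for size in (512, 640):
    start = time.time()
    pipe("a test prompt", height=size, width=size, num_inference_steps=20)
    print(f"{size}x{size}: {time.time() - start:.1f}s")
```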
Has anyone seen something like this happen or have a suggestion as to what I can do to fix it?
For reference, here are my startup settings. I'm running a 4090.