batch_method #181
Comments
Hi @porternw, thank you for bringing this to my attention. I have released a possible fix in Unprompted v9.13.1. Should you need to debug further, please be aware that the […]
Thanks for working on this! Still not functioning correctly, though. Interestingly, while the first Lora gets applied to all generations, the second in the batch shows a second random Lora in the prompt (though it obviously applies the first Lora), while the rest of the prompts in the batch show the first Lora instead of random ones. The "safe" mode works as expected.

```
22:09:59-204471 ERROR Running script process batch: /home/nporter/SD/automatic/extensions/_unprompted/scripts/unprompted.py: KeyError
```
Thanks @porternw, I haven't had a chance yet to investigate further, but I'll re-open the issue so I don't forget.
Hi @porternw, I examined the behavior of batch_count with loras more closely but was not able to reproduce the issue you described. Here is the prompt I tested at a batch_count of 3:
I can see in the "Lora hashes" section of the generation info that a different network was applied to each image. There are no error messages in my console, either. Some general diagnostic questions if you don't mind:
Thanks.
I think I've found the problem. I'm running the SD.Next fork of A1111, which I didn't even think about initially when reporting this. Anyway, comparing the two repositories, it seems that vlad's version doesn't include the attribute `extra_network_data`, which is causing the failure seen above. Removing that reference from the `batch_process` function in unprompted.py seems to resolve the issue (the error message goes away and the loras load normally), though I don't know if there are any consequences of that removal.

Edit: Nope, I was wrong; I got excited and forgot to change the config back to "Standard". SD.Next, up-to-date:

```
During handling of the above exception, another exception occurred:
╭─────────── Traceback (most recent call last) ───────────╮
```
Thanks, @porternw. It sounds like this issue relates to SD.Next specifically, and judging from the error log, you're on the right track. Removing the reference on line 739 may prevent loras from behaving correctly in a batch process, but it should circumvent the error message. I would try disabling it like this:
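The snippet that originally followed this comment wasn't preserved in the thread, so here is a hypothetical sketch of one way to disable the reference safely. The `DummyProcessing` class and `apply_extra_networks` function are illustrative stand-ins, not Unprompted's actual code; the idea is to guard the attribute access with `getattr` so forks that lack `extra_network_data` skip the step instead of raising an `AttributeError`:

```python
class DummyProcessing:
    """Illustrative stand-in for the WebUI's processing object."""
    pass

def apply_extra_networks(p):
    # Guard the attribute access: forks without `extra_network_data`
    # (reportedly SD.Next) skip this step instead of crashing.
    data = getattr(p, "extra_network_data", None)
    if data is None:
        return "skipped"
    return f"applied {len(data)} network type(s)"

p = DummyProcessing()
print(apply_extra_networks(p))  # → skipped
p.extra_network_data = {"lora": ["first", "second"]}
print(apply_extra_networks(p))  # → applied 1 network type(s)
```

The same guard works anywhere the attribute is referenced, which matters here because the thread mentions more than one reference.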
Can you confirm that this at least prevents the crash? There's another section of code later on (line 793) which is responsible for updating lora networks after processing each prompt in a batch run. It uses the same attribute. Ultimately, both references to it may need to be addressed.
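The per-prompt network refresh described above can be sketched roughly as follows. The helper names are hypothetical, not Unprompted's actual implementation; the point is that the lora tags are re-parsed for every prompt in the batch rather than once up front:

```python
import re

def parse_networks(prompt):
    # Pull <lora:name:weight> tags out of the prompt text.
    return re.findall(r"<lora:([^:>]+):[^>]+>", prompt)

def run_batch(prompts):
    # Re-parse the networks for every prompt in the batch, so a freshly
    # randomized prompt gets its own lora instead of reusing the first one.
    return [parse_networks(p) for p in prompts]

batch = ["a cat <lora:styleA:0.8>", "a cat <lora:styleB:0.8>"]
print(run_batch(batch))  # → [['styleA'], ['styleB']]
```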
Still errors after changing that line.

```
12:33:37-380792 ERROR Running script process batch: /home/nporter/SD/automatic/extensions/_unprompted/scripts/unprompted.py: AttributeError
```

Thanks for looking into this!
The new batch method is ignoring additional Loras included in a [choose] block when iterating through the batch count. The Lora from the first batch gets applied to all of them, despite the prompt changing (you can tell it isn't even loading because the command-line log only shows the first Lora being loaded). When I choose "legacy" for batch_method, it picks a new Lora for each generation as expected.
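The difference between the two behaviors can be illustrated with a minimal sketch. This is purely illustrative: `choose` here stands in for Unprompted's [choose] shortcode, and the two lists contrast per-iteration re-evaluation (legacy) with evaluate-once reuse (the reported bug):

```python
import random

def choose(options, rng):
    # Minimal stand-in for a [choose] block: pick one option at random.
    return rng.choice(options)

options = ["<lora:a:1>", "<lora:b:1>", "<lora:c:1>"]
rng = random.Random()

# Legacy-style behavior: the block is re-evaluated for every image,
# so each generation can receive a different lora.
per_iteration = [choose(options, rng) for _ in range(4)]

# Reported new-batch_method behavior: evaluated once, then reused,
# so every image in the batch gets the same lora.
once = choose(options, rng)
reused = [once] * 4

print(len(set(reused)))  # → 1
```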