-
This could improve ControlNet generations.
-
Personally, the issue I'm having a lot of the time is that background details in some images tend to be indistinct and nonsensical, and it seems like this could fix that.
-
It's crazy no one has implemented this yet. The improvement in my testing seems drastic.
-
This is a new type of guidance that replaces the CFG scale and seems to improve the coherence of fine details quite a bit. I'd love to see what it can do with better weights than vanilla SD.
Example: https://imgur.com/a/zypdU5D (Top image was run at CFG scale 10. Bottom image has SAG enabled; note how much cleaner the details are.)
Github: https://github.com/SusungHong/Self-Attention-Guidance
Paper: https://arxiv.org/abs/2210.00939
Hugging Face demo: https://t.co/LnkSiwSCnx (They picked a weirdly low default CFG scale of 3, but details are still better even at the maximum of 10.)
Diffusers implementation: https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
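For anyone who wants to try it before a webui extension exists, here's a minimal sketch using that Diffusers pipeline. The model id and the sag_scale / guidance_scale values below are just illustrative, not tuned settings:

```python
# Minimal example of Self-Attention Guidance via the Diffusers SAG pipeline.
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any SD 1.x checkpoint should work
    torch_dtype=torch.float16,
).to("cuda")

# sag_scale controls the strength of Self-Attention Guidance;
# guidance_scale is the usual CFG scale, so the two can be combined or compared.
image = pipe(
    "a photo of an astronaut riding a horse",
    sag_scale=0.75,
    guidance_scale=7.5,
).images[0]
image.save("sag_example.png")
```

Setting guidance_scale to 1.0 should disable CFG while leaving SAG active, which makes it easier to compare the two side by side.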