[Bug]: Multiple IPAdapter with face_id_plus fails with Exception: Insightface: No face found in image #2627

Open
jameslanman opened this issue Feb 7, 2024 · 9 comments


Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

I tried to run multiple IP-Adapter units to inpaint a specific face, and any scenario where multiple face IP-Adapters are selected fails. This includes using multiple inputs or the batch directory option within ControlNet for IP-Adapter.

Steps to reproduce the problem

  1. I loaded a single IP-Adapter unit in ControlNet, intending to inpaint a face in img2img with the preprocessor "ip-adapter_face_id_plus" and the model "ip-adapter-faceid-plusv2_sd15 [6e14fc1a]".
  2. I did not select "Crop input image based on A1111 mask", because selecting it makes the first unit fail even though it works on a second ControlNet unit. I mention this because I believe it is related.
  3. Run "Generate", and it inpaints the face from the reference image normally. Then load a second IP-Adapter unit in ControlNet with exactly the same settings, but select "Crop input image based on A1111 mask" on the second unit, otherwise the results are wonky.
  4. I get this error in the terminal: Error running before_process_batch: /Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
  5. Deleting the second IP-Adapter ControlNet unit makes it work fine again (a rough API reproduction sketch follows this list).
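For reference, the same two-unit setup can also be driven through the webui API. The sketch below is a rough reproduction attempt, assuming a local webui started with --api and the ControlNet extension's documented alwayson_scripts payload; the image paths, prompt, and base64 helper are placeholders, and the "Crop input image based on A1111 mask" toggle from the report is a UI setting not included here.

# Rough reproduction sketch: img2img inpaint with two identical face_id_plus units.
import base64
import requests

def b64(path: str) -> str:
    # Read an image file and return it as a base64 string, as the API expects.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

face_unit = {
    "enabled": True,
    "module": "ip-adapter_face_id_plus",
    "model": "ip-adapter-faceid-plusv2_sd15 [6e14fc1a]",
    "image": b64("reference_face.png"),   # placeholder reference image
    "weight": 1.0,
    "processor_res": 768,
}

payload = {
    "init_images": [b64("source.png")],   # placeholder source image
    "mask": b64("face_mask.png"),         # placeholder inpaint mask over the face
    "denoising_strength": 0.5,
    "prompt": "a portrait photo",
    "alwayson_scripts": {
        # Two identical face_id_plus units: the configuration that fails in the UI.
        "controlnet": {"args": [dict(face_unit), dict(face_unit)]},
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
print(r.json().get("info", ""))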

What should have happened?

I would expect the normal behavior to be that you can select "Crop input image based on A1111 mask" when inpainting and use an IP-Adapter on just the section you intend to inpaint, regardless of the order in which the ControlNet units are used. Secondly, I would also expect, as is the case in ComfyUI, that you could combine multiple IP-Adapters for faces to get closer to a target likeness.

Commit where the problem happens

webui: version: v1.7.0  •  python: 3.10.13  •  torch: 2.2.0.dev20231025  •  xformers: N/A  •  gradio: 3.41.2  

controlnet: v1.1.440

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments


List of enabled extensions

(screenshot attached: "Screenshot 2024-02-06 at 8 16 32 PM")

Console logs

Applying attention optimization: sub-quadratic... done.
Weights loaded in 3.2s (send model to cpu: 0.5s, load weights from disk: 0.2s, apply weights to model: 2.0s, move model to device: 0.5s).
2024-02-06 20:08:17,185 - ControlNet - INFO - unit_separate = False, style_align = False
2024-02-06 20:08:17,400 - ControlNet - INFO - Loading model: ip-adapter-faceid-plusv2_sd15 [6e14fc1a]
2024-02-06 20:08:17,432 - ControlNet - INFO - Loaded state_dict from [/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/models/ip-adapter-faceid-plusv2_sd15.bin]
2024-02-06 20:08:17,432 - ControlNet - INFO - IP-Adapter faceid plus v2 detected.
2024-02-06 20:08:17,694 - ControlNet - INFO - ControlNet model ip-adapter-faceid-plusv2_sd15 [6e14fc1a](ControlModelType.IPAdapter) loaded.
2024-02-06 20:08:17,695 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2024-02-06 20:08:17,695 - ControlNet - INFO - preprocessor resolution = 768
2024-02-06 20:08:17,696 - ControlNet - INFO - Loading model from cache: ip-adapter-faceid-plusv2_sd15 [6e14fc1a]
2024-02-06 20:08:20,935 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2024-02-06 20:08:20,935 - ControlNet - INFO - preprocessor resolution = 768
*** Error running before_process_batch: /Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py
    Traceback (most recent call last):
      File "/Users/username/stable-diffusion-webui/modules/scripts.py", line 726, in before_process_batch
        script.before_process_batch(p, *script_args, **kwargs)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1153, in before_process_batch
        self.controlnet_hack(p)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 1128, in controlnet_hack
        self.controlnet_main_entry(p)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 969, in controlnet_main_entry
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in input_images]))
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 969, in <listcomp>
        controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in input_images]))
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/controlnet.py", line 936, in preprocess_input_image
        detected_map, is_image = self.preprocessor[unit.module](
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 80, in decorated_func
        return cached_func(*args, **kwargs)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/utils.py", line 64, in cached_func
        return func(*args, **kwargs)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/global_state.py", line 37, in unified_preprocessor
        return preprocessor_modules[preprocessor_name](*args, **kwargs)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/processor.py", line 830, in face_id_plus
        face_embed, _ = g_insight_face_model.run_model(img)
      File "/Users/username/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/processor.py", line 753, in run_model
        raise Exception(f"Insightface: No face found in image.")
    Exception: Insightface: No face found in image.
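The exception is raised by the face_id_plus preprocessor when InsightFace detection returns no faces for a ControlNet input image. As a quick sanity check outside the webui, a minimal standalone sketch like the following reproduces the same pass/fail condition (assuming insightface, onnxruntime, and opencv-python are installed; the "buffalo_l" model pack and the image path are placeholders, and the extension's own detection settings may differ):

# Minimal standalone check: does InsightFace detect a face in this reference image?
import cv2
from insightface.app import FaceAnalysis

# "buffalo_l" is InsightFace's default detector/recognizer pack; the extension may ship its own.
app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

img = cv2.imread("reference_face.png")    # placeholder path to the ControlNet input image
faces = app.get(img)

if not faces:
    # Same condition that makes the preprocessor raise
    # "Insightface: No face found in image."
    print("No face detected - face_id_plus would fail on this input.")
else:
    print(f"Detected {len(faces)} face(s); first bbox: {faces[0].bbox}")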

Additional information

Running locally on an M1 Max (32 GB) on macOS Ventura 13.3.1 (a).

@ringlog

ringlog commented Feb 22, 2024

The same problem here.

@sitzbrau

Me too.

@rohitpaul23

Same here. Could this be a dependency-related issue? I tried a zoomed-out face image as suggested, and on one machine it works fine, whereas the same image on another machine says 'No face found in image'.

@Zergland

I also encountered this error.

@zlobs

zlobs commented Mar 21, 2024

Don't give FaceIDv2 pictures that are too zoomed in, or pictures where the face takes up most of the frame. The upper half of the body is good enough.

@holytony

Don't give FaceIDv2 pictures that are too zoomed in, or pictures where the face takes up most of the frame. The upper half of the body is good enough.

Thanks, this solved my problem.
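If the only available reference is a tight face crop, one possible workaround (an untested sketch, not something the extension does automatically; the path and padding factor are placeholders) is to pad the image before feeding it to the unit, so the detector gets some context around the face:

# Pad a tightly cropped face so the detector sees some context around it.
import cv2

img = cv2.imread("tight_face_crop.png")   # placeholder: a reference cropped tightly to the face
h, w = img.shape[:2]
pad_y, pad_x = h // 2, w // 2             # arbitrary: pad by half the image size on each side

# Repeat edge pixels rather than adding black bars, so the detector sees plausible context.
padded = cv2.copyMakeBorder(img, pad_y, pad_y, pad_x, pad_x, borderType=cv2.BORDER_REPLICATE)
cv2.imwrite("padded_face.png", padded)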

@seancheung
Contributor

seancheung commented May 13, 2024

In txt2img mode, this might be related to #2881.
In img2img mode, disable "Crop input image based on A1111 mask" (it is enabled by default based on the inpainting settings).
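For context on why the crop toggle matters: with "Crop input image based on A1111 mask" active, the ControlNet input is reduced to roughly the inpaint mask's bounding box before preprocessing, so InsightFace only ever sees that region. The following is a hedged sketch of that effect (the extension's actual crop logic may pad and resize differently; the paths and the "buffalo_l" pack are placeholders):

# Illustrate the failure mode: crop the ControlNet input to the mask's bounding box,
# then run face detection on the crop. If the masked region holds no usable face,
# detection fails even though the full image contains one.
import cv2
import numpy as np
from insightface.app import FaceAnalysis

img = cv2.imread("controlnet_input.png")                      # placeholder paths
mask = cv2.imread("inpaint_mask.png", cv2.IMREAD_GRAYSCALE)

ys, xs = np.where(mask > 127)                                 # bounding box of the masked area
crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

app = FaceAnalysis(name="buffalo_l", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))
print("faces in full image:", len(app.get(img)))
print("faces in mask crop :", len(app.get(crop)))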

@jojowan87

Thanks, this solved my problem.

Me too.

@jkyndir

jkyndir commented Dec 5, 2024

In txt2img mode, this might be related to #2881. In img2img mode, disable "Crop input image based on A1111 mask" (it is enabled by default based on the inpainting settings).

Hi there, why is there no checkbox to disable "Crop input image based on A1111 mask" in img2img?
I can disable it in inpaint mode, and the face in the uploaded reference image is then picked up by InsightFace as expected.
But I can't do so in img2img mode; there is no such checkbox.

See the bug I just reported here: #3071
