
[Bug]: Getting an error: RuntimeError: No HIP GPUs are available #16681

Open
1 of 6 tasks
Bassoopioka opened this issue Nov 25, 2024 · 0 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@Bassoopioka

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I'm on a completely fresh install of Ubuntu 22.04.2. I followed the steps in the Automatic Installation on Linux guide. I got the WebUI open, but it gives me the error: RuntimeError: No HIP GPUs are available. What am I doing wrong? What do I need to get this installed and working? My system is an Intel i9-9900K, 32 GB of RAM, and a Radeon RX 6700 XT. I am new to this and don't know half of what I'm doing, so please be patient with me.
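A minimal diagnostic sketch of what can be checked from the repository root (the venv path is taken from the console log below; rocminfo is only available if the ROCm userspace packages are installed, and the group check assumes GPU access is gated by the render/video groups, as described in AMD's ROCm install guide):

cd ~/Desktop/Ai/stable-diffusion-webui

# Is the torch inside the venv a ROCm (HIP) build, and can it see a GPU?
./venv/bin/python -c "import torch; print(torch.__version__); print('HIP runtime:', torch.version.hip); print('GPU visible:', torch.cuda.is_available())"

# Does the ROCm runtime itself see the card? (rocminfo ships with the ROCm packages)
rocminfo | grep -i gfx

# Is this user in the groups that grant access to the GPU device nodes?
groups | grep -E -o 'render|video'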

Steps to reproduce the problem

  1. Opened a terminal in the folder where I want to install the WebUI.
  2. Copied this into the terminal:

sudo apt install git python3.10-venv -y
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui && cd stable-diffusion-webui
python3.10 -m venv venv

  3. Then I ran it with:

./webui.sh --upcast-sampling --skip-torch-cuda-test

  4. The WebUI opens, and when I try to generate an image it spits out the error below (a possible workaround is sketched after these steps):

error: RuntimeError: No HIP GPUs are available
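For reference, a sketch of the launch that is commonly suggested for this card, assuming the venv really does contain a ROCm build of PyTorch (see the checks above); HSA_OVERRIDE_GFX_VERSION=10.3.0 is a widely reported workaround for RDNA2 cards such as the RX 6700 XT and is an assumption here, not something taken from this setup:

# Assumption: the venv already contains a ROCm build of torch.
# gfx1031 (RX 6700 XT) is not an officially supported ROCm target, so the usual
# workaround is to run it on the gfx1030 code path:
HSA_OVERRIDE_GFX_VERSION=10.3.0 ./webui.sh --upcast-sampling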

What should have happened?

It should open the WebUI and use my GPU to generate images...

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

sysinfo-2024-11-25-14-11.json

Console logs

serwu@serwu-Z390-AORUS-MASTER:~/Desktop/Ai/stable-diffusion-webui$ ./webui.sh --upcast-sampling --skip-torch-cuda-test

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################

################################################################
Running on serwu user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.12 (main, Nov  6 2024, 20:22:13) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --upcast-sampling --skip-torch-cuda-test
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
Loading weights [6ce0161689] from /home/serwu/Desktop/Ai/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/serwu/Desktop/Ai/stable-diffusion-webui/configs/v1-inference.yaml
/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 4.8s (import torch: 2.3s, import gradio: 0.5s, setup paths: 0.5s, other imports: 0.2s, load scripts: 0.3s, create ui: 0.3s, gradio launch: 0.5s).
Applying attention optimization: InvokeAI... done.
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/initialize.py", line 149, in load_model
    shared.sd_model  # noqa: B018
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
    load_model()
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/sd_models.py", line 868, in load_model
    with devices.autocast(), torch.no_grad():
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
    if has_xpu() or has_mps() or cuda_no_autocast():
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
    device_id = get_cuda_device_id()
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
    ) or torch.cuda.current_device()
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
    _lazy_init()
  File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No HIP GPUs are available


Stable diffusion model failed to load
Using already loaded model v1-5-pruned-emaonly.safetensors [6ce0161689]: done in 0.0s
*** Error completing request
*** Arguments: ('task(y5cdfr3bjrgz0kp)', <gradio.routes.Request object at 0x7ff2024c1480>, 'woman', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/processing.py", line 920, in process_images_inner
        with devices.autocast():
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 228, in autocast
        if has_xpu() or has_mps() or cuda_no_autocast():
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 28, in cuda_no_autocast
        device_id = get_cuda_device_id()
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/modules/devices.py", line 40, in get_cuda_device_id
        ) or torch.cuda.current_device()
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 778, in current_device
        _lazy_init()
      File "/home/serwu/Desktop/Ai/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
        torch._C._cuda_init()
    RuntimeError: No HIP GPUs are available

---
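The traceback bottoms out in torch.cuda.current_device(), so the same failure can be reproduced outside the WebUI; a minimal sketch, run from the repository root and assuming the same venv as above:

# On a ROCm build of torch with no visible HIP device this raises the same
#   RuntimeError: No HIP GPUs are available
./venv/bin/python -c "import torch; torch.cuda.current_device()"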

Additional information

No response

Bassoopioka added the bug-report label on Nov 25, 2024.