
AI-Stack

This project sets up an AI stack at home for various machine learning and AI tasks, including Open Web UI, Ollama, Stable Diffusion, and Pipeline services. The stack leverages Docker for containerization and Tailscale for secure access.

Table of Contents

  1. Prerequisites
  2. Folder Structure
  3. Setup
  4. Services
  5. Usage
  6. Environment Variables
  7. Ports
  8. Troubleshooting

Prerequisites

Before you begin, ensure you have (inferred from the stack's configuration):

  • Docker and Docker Compose installed.
  • A Tailscale account and an auth key for joining the tailnet.
  • A Brave Search API key (used for web search in Open WebUI).
  • An NVIDIA GPU with current drivers and the NVIDIA Container Toolkit (required for Stable Diffusion/ComfyUI).

Folder Structure

The folder structure of the project is as follows:

ai-stack
├── config
│   └── open-webui.json
├── state
├── .env
├── docker-compose.yaml
├── ollama
├── open-webui
└── stable-diffusion-webui-docker

  • .env: Environment variables file.
  • docker-compose.yaml: Docker Compose configuration file.
  • ollama, open-webui, stable-diffusion-webui-docker: Directories for the respective services and their configurations.

Setup

1. Clone the Repository

git clone https://github.com/cwilliams001/ai-stack.git
cd ai-stack

2. Configure Environment Variables

Create a .env file in the root directory if it does not exist already:

touch .env

Populate the .env file with the necessary environment variables:

PUID=1000
PGID=1000
BRAVE_SEARCH_API_KEY=your-brave-api-key
TS_AUTHKEY=your-tailscale-auth-key
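To catch a missing variable before Compose starts, you can run a quick check; the key names match the list above, and the loop itself is only an illustration:

```shell
# verify the required keys are present in .env before bringing the stack up
for key in PUID PGID BRAVE_SEARCH_API_KEY TS_AUTHKEY; do
  grep -q "^${key}=" .env || echo "missing: ${key}"
done
```

docker compose config will also print the fully interpolated configuration if you want to confirm the values are being picked up.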

3. Changes for ComfyUI

Before starting the stack, clone the stable-diffusion-webui-docker repo into the ai-stack directory (or just copy the necessary files); cloning creates the folder for you:

git clone https://github.com/AbdBarho/stable-diffusion-webui-docker.git

After cloning, you'll want to edit the Dockerfile for the ComfyUI service:

nano stable-diffusion-webui-docker/services/comfy/Dockerfile

I commented out the line that pins ComfyUI to a specific commit hash so the build grabs the latest ComfyUI:

FROM pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime

ENV DEBIAN_FRONTEND=noninteractive PIP_PREFER_BINARY=1

RUN apt-get update && apt-get install -y git && apt-get clean

ENV ROOT=/stable-diffusion
RUN --mount=type=cache,target=/root/.cache/pip \
  git clone https://github.com/comfyanonymous/ComfyUI.git ${ROOT} && \
  cd ${ROOT} && \
  git checkout master && \
#  git reset --hard 276f8fce9f5a80b500947fb5745a4dde9e84622d && \
  pip install -r requirements.txt

WORKDIR ${ROOT}
COPY . /docker/
RUN chmod u+x /docker/entrypoint.sh && cp /docker/extra_model_paths.yaml ${ROOT}

ENV NVIDIA_VISIBLE_DEVICES=all PYTHONPATH="${PYTHONPATH}:${PWD}" CLI_ARGS=""
EXPOSE 7860
ENTRYPOINT ["/docker/entrypoint.sh"]
CMD python -u main.py --listen --port 7860 ${CLI_ARGS}
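After editing, a quick sanity check that the pin is disabled; this assumes the commented line looks exactly like the one in the Dockerfile above:

```shell
# confirm the commit pin is commented out in the ComfyUI Dockerfile
grep -q '^#  git reset --hard' \
  stable-diffusion-webui-docker/services/comfy/Dockerfile \
  && echo "commit pin disabled"
```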

Downloading Models

You’ll want to grab any models you like from HuggingFace. I am using stabilityai/stable-diffusion-3-medium

You'll want to download all of the models, transfer them to your server, and place them in the appropriate folders.

Checkpoint models need to be placed in the Stable-diffusion folder:

stable-diffusion-webui-docker/data/models/Stable-diffusion

Models are any files in the root of stable-diffusion-3-medium with the .safetensors extension.

For the CLIP text encoders, you'll need to create this folder (it doesn't exist by default):

mkdir stable-diffusion-webui-docker/data/models/CLIPEncoder
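Putting the two placement steps together, a sketch of moving the downloaded files into place; it assumes the download landed in a local stable-diffusion-3-medium folder:

```shell
# create the CLIP folder and move the SD3 checkpoint files into place
mkdir -p stable-diffusion-webui-docker/data/models/CLIPEncoder
mv stable-diffusion-3-medium/*.safetensors \
   stable-diffusion-webui-docker/data/models/Stable-diffusion/
```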

Example Workflows for ComfyUI and Stable Diffusion 3 Medium

You'll need to download the example workflows to the machine you use to access ComfyUI so you can import them in the browser. Example workflows are available on HuggingFace in the Stable Diffusion 3 Medium repo.

ComfyUI should show up as a service on your tailnet on port 7860.

4. Pipelines Setup

Connect to Open WebUI

Navigate to the Settings > Connections > OpenAI API section in Open WebUI.

Set the API URL to http://localhost:9099 and the API key to 0p3n-w3bu!.

If your Open WebUI is running in a Docker container, replace localhost with host.docker.internal in the API URL.

Manage Configurations

Go to the Admin Settings > Pipelines tab.

Select your desired pipeline and modify the valve values directly from the WebUI.

Add Anthropic Manifold Pipeline Plugin

Add the plugin from anthropic_manifold_pipeline.py to your pipelines directory.
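For example, if your pipelines directory is mounted at ./pipelines (an assumed host path; match whatever volume your compose file uses), copying the file in is enough:

```shell
# drop the plugin into the directory the pipelines container reads
# ("pipelines" is an assumed host path; match your compose volume mount)
mkdir -p pipelines
cp anthropic_manifold_pipeline.py pipelines/
```

You may need to restart the pipelines container for the plugin to be picked up.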

5. Run Docker Compose

docker compose up -d --build --force-recreate --remove-orphans

This will start all services defined in the docker-compose.yaml file.

Services

1. Tailscale - ts-open-webui

Provides a secure VPN connection to access services.

2. Ollama - ollama

Runs large language models locally and serves them over an API.

3. Open Web UI - open-webui

A web interface for chatting with Ollama models and other AI services.

4. Stable Diffusion Web UI - stable-diffusion-webui

Provides image generation capabilities.

5. Pipeline Services

Provides the ability to add other API endpoints (such as Anthropic) for AI services.

Usage

  1. Accessing Services

    Services are accessed securely through Tailscale. Make sure your device is connected to the Tailscale network with the appropriate tags.

  2. Interacting with the AI Stack

    Each exposed service is available through its respective endpoint. Check the output of docker compose up -d to see which ports are forwarded through Tailscale.

Environment Variables

Refer to the .env file for configuring environment-specific variables:

  • PUID: User ID for permissions.
  • PGID: Group ID for permissions.
  • BRAVE_SEARCH_API_KEY: API key for Brave web search.
  • TS_AUTHKEY: Auth key for Tailscale.

Ports

The services are networked through Tailscale and do not expose ports directly to your local machine. Ensure your Tailscale configuration allows access to these services.

Troubleshooting

  1. Logs

    Check logs for each service to debug issues:

    docker logs <container_name>
  2. Connectivity Issues

    Ensure your device is properly connected to the Tailscale network.

  3. Docker Compose

    If you encounter any issues starting the containers, you can bring down the stack and bring it up again:

    docker compose down
    docker compose up -d
