
Comparing changes

Choose two branches to see what’s changed or to start a new pull request.

base repository: amanbasu/Autonomous-Car-Prototype
base: master
head repository: SeanFrohman/Autonomous-Car-Prototype
compare: master
Able to merge: these branches can be automatically merged.
  • 1 commit
  • 1 file changed
  • 1 contributor

Commits on Dec 24, 2024

  1. Created using Colab

    SeanFrohman committed Dec 24, 2024
    Commit bc5d81e
Showing with 321 additions and 0 deletions.
  1. +321 −0 Invokeai_in_google_colab.ipynb
321 changes: 321 additions & 0 deletions Invokeai_in_google_colab.ipynb
@@ -0,0 +1,321 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/SeanFrohman/Autonomous-Car-Prototype/blob/master/Invokeai_in_google_colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jy86JeXPHQmW"
},
"source": [
"# InvokeAI in Google Colab\n",
"\n",
"### Introduction\n",
"\n",
"This is a tool that uses Google Colab to run the AI image generation tool InvokeAI (https://invoke-ai.github.io/InvokeAI/).\n",
"It builds itself automatically and can connect to Google Drive to save your images.\n",
"It also has the option of running from Google Drive, which takes about 2 GB of Drive space plus your models.\n",
"\n",
"Make sure to enable the GPU. This should be on by default, but the setting can be found in the menu under: Edit > Notebook Settings > Hardware accelerator > GPU.\n",
"\n",
"To start, click \"Runtime\" > \"Run All\". Alternatively, you can click the \"play\" button on each step below, one after the other; there is no need to wait for the previous steps to finish, as they will join a queue.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true,
"id": "cEAEBY2sFdGR"
},
"outputs": [],
"source": [
"#@title 1. Configuration { display-mode: \"form\" }\n",
"#@markdown #Instance Type\n",
"#@markdown **Google_Drive** = Stores ALL files in your Google Drive. The space it takes up depends heavily on the type and number of models you are using, but it will start up much faster as it does not download InvokeAI on each run. If you have issues with this, please use the \"Rebuild\" feature in advanced settings, or alternatively delete the whole install directory. <br>\n",
"#@markdown **Temporary** (NOT recommended) = Everything is stored in the runtime and is removed when the runtime ends or crashes, so make sure to download your images! <br>\n",
"Type = \"Google_Drive\" #@param ['Google_Drive','Temporary'] {type:\"string\"}\n",
"#@markdown <br>\n",
"#@markdown **Rough Startup time:** <br>\n",
"#@markdown It takes about 5-10 mins to start up. Models are downloaded in the Model Manager and can take 2-5 mins each.\n",
"\n",
"\n",
"#@markdown ---\n",
"\n",
"#@markdown #Connection Type.\n",
"#@markdown **NGROK**: (Recommended) Highly stable but needs a little setting up. An NGROK token is required; sign up for free and get one here: https://dashboard.ngrok.com/get-started/your-authtoken Once you have the token, please put it in below.<br>\n",
"#@markdown **NGROK_APT**: An alternate version of NGROK that runs as a Linux service rather than a Python service.<br>\n",
"#@markdown **Localtunnel**: Stable once connected, but sometimes has issues.<br>\n",
"#@markdown **Serveo**: Less stable, but requires no setup; an alternative to Localtunnel.<br>\n",
"connection_type = \"Serveo\" #@param [\"Serveo\",\"Localtunnel\",\"NGROK\",\"NGROK_APT\"]\n",
"ngrok_token = \"ak_2gWE2gsKXsxfxRAoML0ouXbdCGA\" #@param ['None'] {allow-input: true}\n"
]
},
{
"cell_type": "code",
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
],
"metadata": {
"id": "2_xvDMveBS3x"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 2. Models { display-mode: \"form\" }\n",
"\n",
"#@markdown ### There has been a huge overhaul of model management in InvokeAI V4 and newer.\n",
"#@markdown ### All model management is now done in-app; the \"Model Manager\" is found on the left-hand side. <br />\n",
"#@markdown If you are using Temporary mode, would you like to mount Google Drive to import models from?\n",
"GDrive_Import = \"Yes\" #@param [\"Yes\",\"No\"]\n",
"\n",
"#@markdown The path to the root of your Google Drive will be added as /content/drive/MyDrive/invokeai. You will use that path in the \"Model Manager\" to import models."
],
"metadata": {
"id": "g9611wcnE3e6"
},
"execution_count": null,
"outputs": []
},
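{
"cell_type": "code",
"source": [
"#@title (Optional) List files in the Google Drive import folder { display-mode: \"form\" }\n",
"#@markdown A small helper sketch (an editor's addition, not part of the original workflow): it lists any files under /content/drive/MyDrive/invokeai so you can confirm what is there before typing the path into the \"Model Manager\". It assumes Google Drive has already been mounted by the cell above.\n",
"import os\n",
"\n",
"import_root = '/content/drive/MyDrive/invokeai'  #The path referenced in the cell above.\n",
"if os.path.isdir(import_root):\n",
" for dirpath, _, filenames in os.walk(import_root):\n",
"  for name in filenames:\n",
"   print(os.path.join(dirpath, name))\n",
"else:\n",
" print('Folder not found - mount Google Drive first.')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},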
{
"cell_type": "code",
"source": [
"#@title 3. Advanced Options { display-mode: \"form\" }\n",
"\n",
"#@markdown ##Version\n",
"#@markdown By default this notebook uses the latest official release of InvokeAI; however, if you wish to use a specific version, please enter its version code below.<br>\n",
"#@markdown Please be aware: if you are using a \"Google_Drive\" install of this program, DOWNGRADING is not officially supported. It may or may not work. If it does break InvokeAI, please DELETE the whole folder /Google_Drive/InvokeAI/invokeai from https://drive.google.com/, and back up your \"Autoimport\" folder first if required. <br>\n",
"#@markdown Some older versions can be found in the dropdown. Unless you have a VERY GOOD reason not to, always use \"Default\" mode; any version 3.0.2a1 or newer should work.\n",
"Version = \"Default\" #@param [\"Default\",\"5.1.0\",\"4.2.9\",\"4.0.2\",\"3.2.0\",\"3.1.0\",\"3.0.2.post1\",\"3.0.2a1\"] {allow-input: true}\n",
"\n",
"#@markdown ##Rebuild\n",
"#@markdown If you are having any issues with a \"Google_Drive\" install of InvokeAI but want to keep your imported models and settings, set this to \"Yes\" to attempt a repair of the app. <br>\n",
"#@markdown Additionally, some settings are ignored on subsequent runs; use this to run those steps again. <br>\n",
"\n",
"Rebuild = \"No\" #@param [\"Yes\",\"No\"]\n",
"\n",
"#@markdown ---\n",
"\n",
"#@markdown #Model training.<br />\n",
"#@markdown \"hollowstrawberry\" has an amazing Google Colab LoRA maker; it is 100X better than I could do! It can be found here:<br />\n",
"#@markdown Dataset Maker - https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Dataset_Maker.ipynb <br />\n",
"#@markdown LoRA Trainer - https://colab.research.google.com/github/hollowstrawberry/kohya-colab/blob/main/Lora_Trainer.ipynb"
],
"metadata": {
"id": "6WDsQHKoEj7J"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"source": [
"#@title 4. Build and Configure. { display-mode: \"form\" }\n",
"#@markdown Install InvokeAI and its dependencies. This takes around 6-8 mins.\n",
"\n",
"\n",
"from IPython.display import clear_output\n",
"\n",
"#Set up temporary storage if running in \"Temporary\" mode.\n",
"if Type == \"Temporary\":\n",
" file_path = '/content/invokeai'\n",
" noUpdate = '/content/invokeai/noUpdate'\n",
" import os\n",
" if not os.path.exists(file_path):\n",
"  os.makedirs(file_path)\n",
"\n",
" #Mount Google Drive if requested and required.\n",
" if GDrive_Import == \"Yes\":\n",
"  import os\n",
"  from google.colab import drive\n",
"  if not os.path.exists('/content/drive/'):\n",
"   drive.mount('/content/drive')\n",
"\n",
"\n",
"# Mount and set up Google Drive if running in \"Google_Drive\" mode.\n",
"if Type == \"Google_Drive\":\n",
" file_path = '/content/drive/MyDrive/invokeai/invokeaiapp'\n",
" noUpdate = '/content/drive/MyDrive/invokeai/noUpdate'\n",
"\n",
" import os\n",
" from google.colab import drive\n",
" if not os.path.exists(file_path):\n",
"  drive.mount('/content/drive')\n",
" if not os.path.exists(file_path):\n",
"  os.makedirs(file_path)\n",
"\n",
"\n",
"#Action the rebuild flag.\n",
"if Rebuild == \"Yes\":\n",
" !sudo rm -R {noUpdate}\n",
"\n",
"%cd {file_path}\n",
"\n",
"\n",
"#Update pip\n",
"%cd {file_path}\n",
"!curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py\n",
"!python -m pip install --upgrade pip\n",
"\n",
"#Install a dependency\n",
"!apt install python3.10-venv\n",
"\n",
"\n",
"\n",
"#Create InvokeAI root\n",
"import os\n",
"os.environ['INVOKEAI_ROOT'] = file_path\n",
"if not os.path.exists(file_path):\n",
" os.makedirs(file_path)\n",
"\n",
"#Create the virtual environment + Download default Models\n",
"%cd {file_path}\n",
"\n",
"#On subsequent runs, do an \"upgrade\" so system variables load quickly.\n",
"if os.path.exists(noUpdate):\n",
" !python -m venv .venv --prompt InvokeAI\n",
" if Version == \"Default\":\n",
"  !source .venv/bin/activate; python -m pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 --upgrade\n",
" if Version != \"Default\":\n",
"  !source .venv/bin/activate; python -m pip install InvokeAI[xformers]=={Version} --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121 --upgrade\n",
"\n",
"\n",
"#PIP First time install of InvokeAI.\n",
"if not os.path.exists(noUpdate):\n",
" !python -m venv .venv --prompt InvokeAI\n",
" !source .venv/bin/activate; python -m pip install --upgrade pip\n",
"\n",
" if Version == \"Default\":\n",
"  !source .venv/bin/activate; python -m pip install InvokeAI[xformers] --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121\n",
" if Version != \"Default\":\n",
"  !source .venv/bin/activate; python -m pip install InvokeAI[xformers]=={Version} --use-pep517 --extra-index-url https://download.pytorch.org/whl/cu121\n",
"\n",
" !mkdir {noUpdate}\n",
"\n",
"\n",
"# Edit invokeai.yaml\n",
"\n",
"#Adjust Ram Cache\n",
"#!sed -i 's/ram: 7.5/ram: 10.0/' invokeai.yaml\n",
"#!sed -i 's/vram: 0.25/vram: 10.0/' invokeai.yaml\n",
"\n",
"%cd {file_path}\n",
"\n",
"#POST install adjustments\n",
"#!pip install requests==2.32.3\n",
"\n",
"#Clear Output\n",
"clear_output()"
],
"metadata": {
"id": "NO3XyDPsTJ2R"
},
"execution_count": null,
"outputs": []
},
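{
"cell_type": "code",
"source": [
"#@title (Optional) Sanity-check the install { display-mode: \"form\" }\n",
"#@markdown A quick sketch (an editor's addition, not part of the original workflow): it checks that step 4 created the virtual environment and the invokeai-web entry point before you start the app in step 5. If the check fails, re-run step 4, or set \"Rebuild\" to \"Yes\" in the advanced options.\n",
"import os\n",
"\n",
"venv_web = os.path.join(file_path, '.venv', 'bin', 'invokeai-web')\n",
"if os.path.exists(venv_web):\n",
" print('InvokeAI install looks complete:', venv_web)\n",
"else:\n",
" print('invokeai-web not found - re-run step 4 (or set Rebuild to \"Yes\").')"
],
"metadata": {},
"execution_count": null,
"outputs": []
},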
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "LnO01U-W6Yjp"
},
"outputs": [],
"source": [
"#@title 5. Start InvokeAI. { display-mode: \"form\" }\n",
"#@markdown ## Starting the App\n",
"#@markdown This step takes about 15 seconds to generate your URL, but allow another 30 seconds after launch before it works fully! <br>\n",
"\n",
"#@markdown ## Notes about connection types.\n",
"#@markdown **NGROK** = Very stable but requires a token, see the \"configuration\" step for more details.<br>\n",
"#@markdown **Localtunnel** = Quite stable once it gets going, but it often has \"502\" errors; you must wait for THEM to fix those, so please try another connection type in the meantime.<br>\n",
"#@markdown **Serveo** = Almost always connects; however, it can drop your connection if an HTTP error occurs. Simply wait for the images to finish generating, then stop and re-start this step.\n",
"\n",
"\n",
"\n",
"%cd {file_path}\n",
"import os\n",
"\n",
"if connection_type == \"Serveo\":\n",
" !ssh -o StrictHostKeyChecking=no -o ServerAliveInterval=60 -R 80:localhost:9090 serveo.net & . {file_path}/.venv/bin/activate; invokeai-web\n",
"\n",
"if connection_type == \"Localtunnel\":\n",
" print(\"How to connect to localtunnel:\");\n",
" print(\"A Localtunnel interface connection is generated here. To use it, please do the following:\")\n",
" print(\"1. Copy this IP address\")\n",
" !curl ipv4.icanhazip.com\n",
" print(\"2. Click the random 'https://XXX-YYY-ZZZ.loca.lt' link that is generated below.\")\n",
" print(\"3. Paste the IP into the provided box and submit. \")\n",
" print(\" \")\n",
" print(\"Note: A '502 Bad Gateway' error is typically an error at Localtunnel's end. A '504 Gateway Time-out' error means InvokeAI has not started yet.\")\n",
" print(\" \")\n",
" !npm install -g localtunnel\n",
" !npx localtunnel --port 9090 & . {file_path}/.venv/bin/activate; invokeai-web\n",
"\n",
"if connection_type == \"NGROK\":\n",
" if ngrok_token == \"None\":\n",
"  print(\"You have selected NGROK but did not supply an NGROK token.\")\n",
"  print(\"Falling back to a 'Serveo' connection type.\")\n",
"  print(\"Please either add an NGROK token in step 1, re-run step 1, then re-run this step; or just re-run this step to use 'Serveo'.\")\n",
"  connection_type = \"Serveo\"\n",
" if ngrok_token != \"None\":\n",
"  !pip install pyngrok --quiet\n",
"  from pyngrok import ngrok\n",
"  ngrok.kill()\n",
"  ngrok.set_auth_token(ngrok_token)\n",
"  public_url = ngrok.connect(9090).public_url\n",
"  print(f'InvokeAI Public URL: {public_url}')\n",
"  ! . {file_path}/.venv/bin/activate; invokeai-web\n",
"\n",
"#NGROK_APT\n",
"if connection_type == \"NGROK_APT\":\n",
" if ngrok_token == \"None\":\n",
"  print(\"You have selected NGROK but did not supply an NGROK token.\")\n",
"  print(\"Falling back to a 'Serveo' connection type.\")\n",
"  print(\"Please either add an NGROK token in step 1, re-run step 1, then re-run this step; or just re-run this step to use 'Serveo'.\")\n",
"  connection_type = \"Serveo\"\n",
" if ngrok_token != \"None\":\n",
"  !curl -sSL https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null\n",
"  !echo \"deb https://ngrok-agent.s3.amazonaws.com buster main\" | sudo tee /etc/apt/sources.list.d/ngrok.list\n",
"  !sudo apt update\n",
"  !sudo apt install ngrok\n",
"  !ngrok config add-authtoken {ngrok_token}\n",
"  clear_output()\n",
"  !echo \"You can find the connection URL here in the NGROK portal:\"\n",
"  !echo \"https://dashboard.ngrok.com/endpoints\"\n",
"  !nohup ngrok http http://localhost:9090 &\n",
"  ! . {file_path}/.venv/bin/activate; invokeai-web"
]
}
],
"metadata": {
"colab": {
"private_outputs": true,
"provenance": [],
"gpuType": "A100",
"machine_shape": "hm",
"include_colab_link": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU"
},
"nbformat": 4,
"nbformat_minor": 0
}