rag_time

💁 Example code for a blog post series about using a RAG system on a local codebase.

Simple Chat UI to chat about a private codebase using LLMs locally.

Technology: Python, Ollama (local LLM runtime), Chainlit (chat UI), and a local vector database.

Getting started

Prerequisites

  1. Make sure you have Python 3.9 or later installed

  2. Download and install Ollama

  3. Pull the model (a quick availability check follows this list):

    ollama pull llama3.2:3b
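
Optionally, confirm Ollama is serving and the model is present. A minimal sketch (the script name is hypothetical, not part of the repo) using Ollama's local REST API, whose /api/tags endpoint lists pulled models:

    # check_ollama.py -- optional sanity check, not part of this repo
    import json
    import urllib.request

    # Ollama serves a local REST API on port 11434 by default.
    with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
        models = [m["name"] for m in json.load(resp)["models"]]

    print("Available models:", models)
    if not any(name.startswith("llama3.2") for name in models):
        print("Model missing -- run: ollama pull llama3.2:3b")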

Run the Chat bot

  1. Create a Python virtual environment and activate it:

    python3 -m venv .venv && source .venv/bin/activate

  2. Install Python dependencies:

    pip install -r requirements.txt

  3. Clone an example repository for the chat bot to answer questions about:

    git clone https://github.com/discourse/discourse

  4. Set up the vector database (a sketch of this ingestion step follows the list):

    python ingest-code.py

  5. Start the chat bot:

    chainlit run main.py

  6. When you are done, exit the Python virtual environment:

    deactivate
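
What does the ingestion step do? Roughly: it walks the cloned repository, splits source files into chunks, embeds them, and writes them to the vector database so the chat bot can retrieve relevant snippets. A minimal sketch of that step, assuming chromadb as the vector store (the repo's actual ingest-code.py may use a different store or chunking):

    # Sketch of a code-ingestion step -- assumes chromadb; the repo's
    # actual ingest-code.py may differ.
    from pathlib import Path
    import chromadb

    client = chromadb.PersistentClient(path=".chroma")        # on-disk store
    collection = client.get_or_create_collection("codebase")

    docs, ids = [], []
    for f in Path("discourse").rglob("*.rb"):                 # files to index
        text = f.read_text(errors="ignore")
        # Chunk each file so retrieval returns focused snippets.
        for j, start in enumerate(range(0, len(text), 1500)):
            docs.append(text[start:start + 1500])
            ids.append(f"{f}-{j}")

    # Chroma embeds documents with its default embedding function on add().
    collection.add(documents=docs, ids=ids)
    print(f"Indexed {len(docs)} chunks")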

Make it your own

Edit the .env file to point the chat bot at your own codebase and programming language.
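
The exact variable names depend on the repo's .env, but the idea looks like this (all names here are hypothetical):

    # Hypothetical .env values -- check the repo's .env for the real names.
    REPO_PATH=./discourse   # codebase to index
    LANGUAGE=ruby           # language of the codebase
    MODEL=llama3.2:3b       # Ollama model to chat with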
