A simple Streamlit app that does RAG
- Set up your environment secrets; refer to .streamlit/secrets.toml.example (a sketch of how they are read follows these steps)
- Install the dependencies
$ pip install -r src/requirements.txt
- Run the app
$ streamlit run src/Chat.py
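For orientation, values placed in .streamlit/secrets.toml are exposed to the app through st.secrets at runtime. The key names below are assumptions for illustration only; check .streamlit/secrets.toml.example for the actual keys the app expects.

```python
# Illustrative only: how a Streamlit page might read the secrets once the
# file is filled in. Key names here are assumptions, not the repo's actual keys.
import streamlit as st

ollama_endpoint = st.secrets["OLLAMA_ENDPOINT"]  # e.g. a local Ollama or the Modal URL set up below
mongodb_uri = st.secrets["MONGODB_URI"]          # MongoDB Atlas connection string

st.caption(f"Using Ollama endpoint: {ollama_endpoint}")
```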
Disclaimer: This backend server is strictly for demonstrating a proof of concept for running LLMs on Modal Labs via Ollama. You are encouraged to support Modal Labs via their paid services for a better experience, support, and system scalability.
- Set up an account on Modal Labs via the Modal Labs sign-up page
- Pull this repo and navigate to the project directory using the terminal / command line interface
$ git clone https://github.com/ProtoFaze/chatbot.git
$ cd chatbot
- Install Modal's Python library using pip
$ pip install modal
- Run the setup command and follow the instructions in your browser to authenticate your device for accessing your Modal account
$ modal setup
- Change your directory to the backend codebase and deploy the server
$ cd ollama_backend
$ modal deploy ollama_backend.py
Your terminal / command line interface should show a URL for accessing the server (a smoke-test sketch follows these steps).
- Change the endpoint
  - In your frontend, navigate to the settings page, fill the URL from the previous step into the Ollama endpoint field, and submit,
  - or just change the environment variable or secrets that your frontend accesses for the Ollama endpoint (it uses .toml by default; feel free to refactor it to use .env).
- Try running your own server on the chat page now.
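As referenced in the deploy step above, a quick smoke test of the deployed URL could look like the sketch below, assuming the backend exposes Ollama's standard /api/generate route; the URL and model name are placeholders.

```python
# Placeholder smoke test against the deployed backend, assuming it mirrors
# Ollama's HTTP API. Replace the URL with the one printed by `modal deploy`.
import requests

ENDPOINT = "https://your-workspace--ollama-backend.modal.run"  # placeholder URL

response = requests.post(
    f"{ENDPOINT}/api/generate",
    json={"model": "llama3.2", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
print(response.json()["response"])
```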
This is a step-by-step development of an insurance salesman chatbot proof of concept. The chatbot should be able to access the corpus of data related to the product it is advertising. The POC is set up using open-source and free-tier options; no free trials from providers are required.
POLM tech stack:
- Python 3.12
- Ollama for edge-device language model hosting
- LlamaIndex for parsing and ingestion (see the rough sketch after this list)
- Modal Labs for provisioning compute to develop and test with ASGI web endpoints and LLM inference
- MongoDB Atlas for data storage
- Streamlit for the user interface
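As a rough sketch of how the ingestion pieces above could fit together (database, collection, and file paths are made up, and embedding/LLM settings are omitted; the repo's notebook is the authoritative version):

```python
# Hedged sketch: parse local documents with LlamaIndex and persist the vector
# index in MongoDB Atlas. All names and paths are illustrative only.
import pymongo
from llama_index.core import SimpleDirectoryReader, StorageContext, VectorStoreIndex
from llama_index.vector_stores.mongodb import MongoDBAtlasVectorSearch

client = pymongo.MongoClient("<MONGODB_ATLAS_URI>")  # connection string kept in secrets
vector_store = MongoDBAtlasVectorSearch(
    client,
    db_name="chatbot",               # hypothetical database name
    collection_name="product_docs",  # hypothetical collection name
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)

documents = SimpleDirectoryReader("data").load_data()  # e.g. the product PDFs
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

# At query time the same index backs the RAG answers (requires an Atlas
# vector search index on the collection).
print(index.as_query_engine().query("What does this product cover?"))
```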
- Retrieval-Augmented Generation (RAG)
- Structured LLM output
- Few-shot prompting
- PDF processing workflow (via notebook)
- structured data corpus fetch
- structured outputs via JSON schema (see the sketch after this list)
- basic chat log analysis
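The structured-outputs feature can be pictured roughly as below: an Ollama chat reply constrained to a JSON schema, seeded with a one-shot example as lightweight few-shot prompting. The model, schema, and endpoint are illustrative assumptions, not the repo's actual values.

```python
# Hedged sketch of structured output via a JSON schema with the Ollama Python
# client, using a one-shot example message as lightweight few-shot prompting.
from ollama import Client
from pydantic import BaseModel

class LeadInfo(BaseModel):
    name: str
    interested_product: str
    follow_up_needed: bool

client = Client(host="https://your-workspace--ollama-backend.modal.run")  # placeholder endpoint
response = client.chat(
    model="llama3.2",  # placeholder model
    messages=[
        # one-shot example showing the expected shape of the answer
        {"role": "user", "content": "I'm Alice, keen on the term life plan, call me back."},
        {"role": "assistant", "content": '{"name": "Alice", "interested_product": "term life", "follow_up_needed": true}'},
        {"role": "user", "content": "Hi, I'm Bob and I just want a brochure for the medical card."},
    ],
    format=LeadInfo.model_json_schema(),  # constrain the reply to the schema
)
print(LeadInfo.model_validate_json(response.message.content))
```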
Thanks for sharing these demos and blogs:
Using Streamlit with Ollama for prototyping
demo of Streamlit with Ollama
Using LlamaParse with Ollama
the repo
the blog article
Integrating LlamaParse vector indexes with MongoDB
How to Build a RAG System With LlamaIndex, OpenAI, and MongoDB Vector Database
Using Ollama as a freemium backend service
run ollama with modal
Setting up response streaming via FastAPI (compatible with Modal Labs; a rough sketch of the pattern follows this list)
FastAPI Streaming Response: Error: Did not receive done or success response in stream
Using structured outputs on Ollama
Structured outputs
Controlling page visibility on Streamlit
Hide/show pages in multipage app based on conditions
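The streaming pattern credited above can be sketched roughly as a FastAPI route that forwards Ollama's streamed chunks to the client; the endpoint, route, and model below are placeholders, not the repo's actual code.

```python
# Hedged sketch of response streaming: a FastAPI route that relays Ollama's
# newline-delimited JSON chunks to the caller as they arrive.
import httpx
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()
OLLAMA_URL = "http://localhost:11434"  # placeholder; the Ollama server reachable from the route

@app.post("/chat")
async def chat(prompt: str):
    async def stream():
        async with httpx.AsyncClient(timeout=None) as client:
            async with client.stream(
                "POST",
                f"{OLLAMA_URL}/api/generate",
                json={"model": "llama3.2", "prompt": prompt, "stream": True},
            ) as resp:
                async for line in resp.aiter_lines():
                    yield line + "\n"  # each line is one JSON chunk from Ollama

    return StreamingResponse(stream(), media_type="application/x-ndjson")
```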