This project is a collection of applications leveraging Large Language Models (LLMs) for various tasks such as chatbots, document summarization, and retrieval-augmented generation (RAG). The applications are built using LangChain and LangGraph for the backend processing and Streamlit for the UI.
- Support for multiple LLM providers (OpenAI and Groq)
- Model selection for each provider:
  - OpenAI: `gpt-4o-mini`, `gpt-4-turbo`, `gpt-4o`
  - Groq: `llama3-70b-8192`, `llama3-8b-8192`
- Message history with auto-trimming
- Audio capabilities:
  - Voice input using OpenAI's Whisper for speech-to-text
  - Text-to-speech responses using OpenAI's TTS
  - Automatic audio playback of responses
- Two summarization techniques:
  - Stuff: for shorter documents that fit within the context window
  - Map-Reduce: for longer documents that exceed context limits
- PDF document support
- Agentic RAG: QA with memory
  - Tool calling: the model decides whether a retrieval step is needed. If it is, the user query is rewritten based on the chat history (contextualization); if not, the model responds directly without retrieval (e.g., to a generic greeting). A short sketch of this decision appears after the feature list.
  - FAISS vector store for efficient similarity search
  - OpenAI embeddings integration
- Built with Streamlit for an interactive UI
- LangChain and LangGraph integration for LLM operations
- Modular architecture with separate pages
- Authentication via API key or password
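The tool-calling retrieval decision described in the Agentic RAG feature can be illustrated with a minimal sketch. This is not the app's actual code: it assumes the `langchain-openai`, `langchain-community`, and `faiss-cpu` packages are installed, an OpenAI API key is set in the environment, and the indexed document is a one-line placeholder.

```python
from langchain_community.vectorstores import FAISS
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Index a placeholder document in FAISS using OpenAI embeddings.
vector_store = FAISS.from_texts(
    ["LangGraph is a library for building stateful, multi-step LLM applications."],
    OpenAIEmbeddings(),
)

@tool
def retrieve(query: str) -> str:
    """Return passages from the vector store relevant to the query."""
    docs = vector_store.similarity_search(query, k=2)
    return "\n\n".join(d.page_content for d in docs)

# Bind the retrieval tool so the model can decide whether to call it.
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([retrieve])

# A generic greeting usually produces no tool call, so the app can answer directly.
print(llm.invoke("Hi there!").tool_calls)

# A knowledge question usually produces a `retrieve` tool call, which the app
# would execute before generating the final, grounded answer.
print(llm.invoke("What is LangGraph used for?").tool_calls)
```

In the full app, the retrieved passages are fed back to the model (e.g., via a LangGraph tool node) to produce the final answer.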
- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/llm-app-collection.git
  cd llm-app-collection
  ```
- Configure API keys in a `.streamlit/secrets.toml` file (the sketch after these steps shows one way the app can read them):

  ```toml
  OPENAI_API_KEY_DEV = "your-openai-key"
  GROQ_API_KEY_DEV = "your-groq-key"
  PASSWORDS = ["your-password"]
  ```
- Create and activate a virtual environment:

  Ubuntu/Linux:

  ```bash
  pip install virtualenv
  virtualenv .venv
  source .venv/bin/activate
  ```

  Windows:

  ```bash
  pip install virtualenv
  virtualenv .venv
  .venv\Scripts\activate.bat
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Run the application:

  ```bash
  streamlit run app.py
  ```
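How the keys configured in step 2 are consumed depends on the app's own pages, but a minimal sketch of reading them with Streamlit's secrets API might look like the following. The password gate and model call are illustrative only, not the project's actual authentication flow; it assumes `streamlit` and `langchain-openai` are installed.

```python
import streamlit as st
from langchain_openai import ChatOpenAI

# Read the values defined in .streamlit/secrets.toml (key names from the example above).
openai_key = st.secrets["OPENAI_API_KEY_DEV"]
allowed_passwords = st.secrets.get("PASSWORDS", [])

# Illustrative password gate; the project's real flow may differ.
password = st.text_input("Password", type="password")
if password not in allowed_passwords:
    st.stop()  # halt rendering until a valid password is entered

llm = ChatOpenAI(model="gpt-4o-mini", api_key=openai_key)
st.write(llm.invoke("Hello!").content)
```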