Your local RAG chatbot

CRAG4LAUD

CRAG4LAUD (Chatbot RAG for landscape architects, architects, and urban planners) is a simple demo project showcasing Retrieval-Augmented Generation (RAG): upload PDF/CSV/XLSX files, build vector embeddings, and ask questions answered with context retrieved from those files. It also includes a ProRAG module for working with table-based strategies and their dependencies, which is particularly useful in architecture, landscape architecture, and urban planning contexts.
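To make the RAG flow concrete, here is a minimal sketch of the retrieval step: document chunks are stored with embedding vectors, and the chunks most similar to the query embedding are returned as context for the LLM. The toy three-dimensional vectors and the in-memory store are illustrative only; the real app uses embeddings from an embedding model and its own store.

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical in-memory "vector store": chunks of uploaded documents
// paired with (toy) embedding vectors.
const store = [
  { text: 'Rain gardens manage stormwater runoff.', vec: [0.9, 0.1, 0.0] },
  { text: 'Zoning codes regulate land use.',        vec: [0.1, 0.9, 0.1] },
];

// Return the top-k chunks most similar to the query embedding; the
// retrieved text is then prepended to the LLM prompt as context.
function retrieve(queryVec, k = 1) {
  return store
    .map((d) => ({ ...d, score: cosine(queryVec, d.vec) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const best = retrieve([0.8, 0.2, 0.0]);
console.log(best[0].text); // the chunk closest to the query embedding
```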

Features

Technology Stack

Getting Started

1. Clone the Repository

git clone https://github.com/JornsenChao/CRAG4LAUD.git
cd CRAG4LAUD

2. Install Dependencies

To run this locally, you will run both the server and the frontend, each on a different port.

1. Install dependencies in the project root:

npm install

2. Install the server dependencies:

cd server
npm install

3. Run the application (frontend and server together):

cd ..
npm run dev

The root npm install installs the frontend dependencies specified in the root package.json. The server folder has its own package.json for backend dependencies, which is why the second install runs inside server.

3. Environment Variables

Create a .env file in the root or in the server directory for storing sensitive keys (e.g., your API keys). Example:

OPENAI_API_KEY=sk-xxx
OPENAI_MODEL=gpt-3.5-turbo
HUGGINGFACE_API_KEY=hf_xxx
HUGGINGFACE_MODEL=deepseek-ai/DeepSeek-V3-Base

Make sure you add .env to your .gitignore and never commit real API keys.
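The Node server can then read these variables from process.env. A sketch of how that might look is below; the actual server likely loads the file with the dotenv package (require('dotenv').config() near the top of its entry file), and the fallback defaults here are assumptions, not the project's confirmed behavior.

```javascript
// Sketch: build a config object from environment variables, with
// hedged defaults matching the .env example above.
function loadConfig(env = process.env) {
  const cfg = {
    openaiKey: env.OPENAI_API_KEY,
    openaiModel: env.OPENAI_MODEL || 'gpt-3.5-turbo',       // assumed default
    hfKey: env.HUGGINGFACE_API_KEY,
    hfModel: env.HUGGINGFACE_MODEL || 'deepseek-ai/DeepSeek-V3-Base',
  };
  if (!cfg.openaiKey) {
    // Fail fast with a clear message instead of a cryptic API error later.
    throw new Error('OPENAI_API_KEY is missing; check your .env file');
  }
  return cfg;
}

// Usage (after dotenv has populated process.env):
// const config = loadConfig();
```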

4. Run in Development

Use the dev script to start both the React dev server (port 3000) and the Node server (port 9999 by default) simultaneously:

npm run dev

The React app is served at http://localhost:3000. The Express backend is served at http://localhost:9999.

5. Usage

QuickTalk Mode:

In the frontend, select a file (PDF/CSV/XLSX) to upload or load a demo. Ask questions in the chat UI. The server does a vector-based RAG retrieval and returns an answer.
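Before retrieval can happen, the server has to split the uploaded document's text into chunks to embed. The sketch below shows the general idea with overlapping fixed-size chunks; the chunk size, overlap, and splitting strategy here are illustrative, not the project's actual values.

```javascript
// Split text into overlapping chunks of `size` characters, stepping
// forward by (size - overlap) each time. Overlap keeps context that
// straddles a chunk boundary retrievable from both chunks.
function chunkText(text, size = 20, overlap = 5) {
  const chunks = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last chunk reached the end
  }
  return chunks;
}

// Each chunk would then be embedded and stored in the vector store.
```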

ProRAG Mode:

  • Currently tailored for architects, landscape architects, and urban designers to handle structured “project context – project strategy” data.
  • A future mode for handling large design codes and local planning PDFs is under development.
  1. Upload a spreadsheet and map its columns to “dependency,” “strategy,” and “reference.”
  2. Build the store, then fill in your project context.
  3. Click “RAG Query” or “RAG Query with CoT” to get an LLM-generated answer referencing your table data.
  4. (Optionally) view or build a graph showing how strategies connect to references, dependencies, or frameworks.
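Step 1's column mapping can be pictured as turning arbitrary spreadsheet headers into the three ProRAG roles. This is a hypothetical sketch: the role names mirror the UI labels, but the column names, row shape, and actual data model are assumptions.

```javascript
// Map spreadsheet rows (objects keyed by column header) into records
// with the three ProRAG roles: dependency, strategy, reference.
function mapColumns(rows, mapping) {
  return rows.map((row) => ({
    dependency: row[mapping.dependency],
    strategy: row[mapping.strategy],
    reference: row[mapping.reference],
  }));
}

// Example with made-up column names:
const rows = [
  { Context: 'coastal flooding', Strategy: 'living shoreline', Source: 'NOAA 2021' },
];
const records = mapColumns(rows, {
  dependency: 'Context',
  strategy: 'Strategy',
  reference: 'Source',
});
// records[0] → { dependency, strategy, reference } ready to embed/store
```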

6. Build for Production

If you want to bundle the React app for production:

npm run build

This creates a build folder with a production-optimized version of your React app. Typically, you’d then serve those static files using your Node server or a hosting provider.

Scripts Overview

npm run dev: runs the React frontend and the Node server concurrently for local development.
npm start: runs the React dev server (Create React App default).
npm run server: changes into the server directory and runs the Node/Express server with nodemon.
npm run build: builds the React frontend into a production build directory.
npm test: runs the React test runner (Jest/React Testing Library).
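These scripts likely correspond to entries along the following lines in the root package.json. This is a sketch under stated assumptions: the exact commands, the use of the concurrently package, and the nodemon entry point are inferred from the script descriptions, not confirmed from the repository.

```json
{
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "server": "cd server && nodemon index.js",
    "dev": "concurrently \"npm start\" \"npm run server\""
  }
}
```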

License

No explicit license is currently provided. You may customize or add your own if needed. For usage or distribution, check with the repository owner.

Thanks for checking out CRAG4LAUD! If you have any questions or feedback, feel free to open an issue or contact the repository owner. Enjoy experimenting with the RAG + Graph approach!
