
Chatbot Portfolio Backend

This is the backend of the portfolio project Chatbot Portfolio. It uses FastAPI, LangChain, and Qdrant for its retrieval-augmented generation (RAG) implementation.

[GIF: Chatbot Portfolio data flow]

This backend exposes a single endpoint for the chatbot, implemented with LangChain, llama.cpp, Qdrant, and Sentence-Transformers.
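
As a rough illustration, a single-endpoint chatbot API in FastAPI might look like the sketch below. The route name, request/response models, and answer_question helper are hypothetical, not the repository's actual code (see app/main.py for the real app):

    # Minimal sketch of a single-endpoint chatbot API (hypothetical names).
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ChatRequest(BaseModel):
        question: str

    class ChatResponse(BaseModel):
        answer: str

    def answer_question(question: str) -> str:
        # Placeholder: the real app would invoke the LangChain RAG chain
        # built around llama.cpp, Qdrant, and Sentence-Transformers.
        return "stub answer"

    @app.post("/chat", response_model=ChatResponse)
    def chat(request: ChatRequest) -> ChatResponse:
        return ChatResponse(answer=answer_question(request.question))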

Read the article attached here.

Implementation Disclaimer

After the release of OpenAI GPTs, this server was shut down in favor of a fully client-side web application that redirects to GPTs, in order to save resources. However, this application can still be cloned, tested, and modified if needed. For the frontend implementation, see this repo. For the client implementation, click here. For the GPTs implementation, click here.

Installation

This section walks through installing the full chatbot from scratch.

Core Installation

Python is the base language of this server application, so a dedicated environment (conda or virtualenv) is highly recommended. This application was tested on Python 3.11.5.
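
For example, with the built-in venv module (one possible setup; conda works just as well):

    python -m venv .venv
    source .venv/bin/activate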

  1. Install the requirements:
pip install -r requirements.txt
  2. Download the model and place it in the app/core/models folder. You can download the model from Hugging Face here.

Note: You can use any model you want, but make sure the prompt is formatted correctly in app/core/setup_bot.py.
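
As a rough example, a Llama-2-chat-style prompt could be defined with a LangChain PromptTemplate as below. The exact template depends on the model you download; this is not the repository's actual prompt:

    from langchain.prompts import PromptTemplate

    # Example prompt in Llama-2-chat format; adjust the special tokens
    # to whatever the downloaded model expects.
    TEMPLATE = """<s>[INST] <<SYS>>
    You answer questions about the CV provided in the context.
    <</SYS>>

    Context:
    {context}

    Question: {question} [/INST]"""

    prompt = PromptTemplate(
        template=TEMPLATE,
        input_variables=["context", "question"],
    )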

  3. Put the CV information in app/core/database_files as a TXT file (named CVText.txt for straightforward execution). You can change the input file in app/core/setup_bot.py; see the loading sketch after this list.

  4. Test the solution directly using:

uvicorn app.main:app
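
For reference, loading and chunking the CV text with LangChain could look roughly like this sketch; the chunk sizes are illustrative, not the project's actual values:

    from langchain.document_loaders import TextLoader
    from langchain.text_splitter import CharacterTextSplitter

    # Load the CV text placed in app/core/database_files.
    docs = TextLoader("app/core/database_files/CVText.txt").load()

    # Split into chunks small enough for embedding and retrieval.
    splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=50)
    chunks = splitter.split_documents(docs)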

Qdrant Implementation

A Docker Compose file for Qdrant is provided at images/docker-compose.yml so this solution can use a vector store (the recommended way to implement RAG).

  1. Deploy the Docker container for the Qdrant vector database.
docker compose -f ./images/docker-compose.yml up -d
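
Once the container is up, indexing the CV chunks in Qdrant through LangChain could look roughly like this; the embedding model and collection name are illustrative choices, not values taken from the repository:

    from langchain.embeddings import HuggingFaceEmbeddings
    from langchain.vectorstores import Qdrant

    # Sentence-Transformers model wrapped by LangChain (illustrative choice).
    embeddings = HuggingFaceEmbeddings(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )

    # Index the chunks from the loading sketch above in the Qdrant
    # container, and expose them as a retriever for the RAG chain.
    vectorstore = Qdrant.from_documents(
        chunks,
        embeddings,
        url="http://localhost:6333",
        collection_name="cv",  # illustrative collection name
    )
    retriever = vectorstore.as_retriever()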

Docker Setup for Deployment

WARNING: I highly recommend testing the LLM locally before moving it into a Docker container for production. Because of the requirements and large model files, the build can take a long time, so be careful in your CI/CD processes. A .dockerignore file is provided so you can exclude any files you consider unnecessary.
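
For example, a .dockerignore might contain entries like these (illustrative; adjust to what your image actually needs):

    # Example .dockerignore entries.
    .git
    .venv
    __pycache__/
    images/
    # app/core/models/   # only if you mount the model at runtime
                         # instead of baking it into the image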

  1. To run this project you can also build your own Docker image and run it on a server. A Dockerfile is provided for this.
docker build -t chatbot-backend .
  2. Once the image is built, run it with docker run, publishing the API port:
docker run -p 8000:8000 chatbot-backend

Practical Solutions

  • Because embedding models can retrieve inefficiently, the full document can instead be provided to the LLM manually. This can be implemented in app/core/setup_bot.py; Qdrant and the embedding model would not be used with this approach (see the sketch after this list).
  • The initial (uncommitted) version of the software customized the output format without LangServe, so LangServe was not used in this implementation.
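
A minimal sketch of that full-document approach, assuming llama-cpp-python through LangChain's LlamaCpp wrapper (the model filename, n_ctx, and question are illustrative):

    from langchain.llms import LlamaCpp

    # Load the local model from app/core/models (illustrative filename).
    # n_ctx may need to be raised so the full document fits in context.
    llm = LlamaCpp(model_path="app/core/models/model.gguf", n_ctx=4096)

    # Read the whole CV and pass it as context, skipping Qdrant and
    # the embedding model entirely.
    with open("app/core/database_files/CVText.txt") as f:
        cv_text = f.read()

    full_prompt = (
        f"Context:\n{cv_text}\n\n"
        "Question: What is the candidate's latest role?"
    )
    print(llm(full_prompt))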

Potential Improvements

  • A CI/CD pipeline is always valuable. A GitHub Action to build and host the Docker image could be implemented.

Issues or Contributions

Feel free to open an issue or pull request. As this is a small portfolio project, use whatever format you find most useful. Thank you in advance!

License

This repository is licensed under the MIT License; see the LICENSE file.
