Add smoke test, initial serving tasks
Signed-off-by: jphillips <[email protected]>
fearnworks committed Oct 6, 2024
1 parent 8ed1707 commit 574e79f
Showing 6 changed files with 71 additions and 0 deletions.
5 changes: 5 additions & 0 deletions modules/odr_caption/.env.caption.template
@@ -0,0 +1,5 @@
VLLM_HOST=0.0.0.0
VLLM_PORT=32000

CAPTION_API_HOST=0.0.0.0
CAPTION_API_PORT=32001
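
(Note: the caption service that consumes these settings is not part of this commit. The sketch below is only an assumption of how they might be loaded with python-dotenv; the `.env.caption` filename and everything beyond the variable names and defaults above is hypothetical.)

    import os
    from dotenv import load_dotenv

    # Assumed usage: copy .env.caption.template to .env.caption, then load it.
    load_dotenv(".env.caption")

    VLLM_HOST = os.getenv("VLLM_HOST", "0.0.0.0")
    VLLM_PORT = int(os.getenv("VLLM_PORT", "32000"))
    CAPTION_API_HOST = os.getenv("CAPTION_API_HOST", "0.0.0.0")
    CAPTION_API_PORT = int(os.getenv("CAPTION_API_PORT", "32001"))

    # Base URL for the OpenAI-compatible vLLM server started by the Taskfile below.
    VLLM_API_BASE = f"http://{VLLM_HOST}:{VLLM_PORT}/v1"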
14 changes: 14 additions & 0 deletions modules/odr_caption/Taskfile.yml
@@ -0,0 +1,14 @@
version: '3'

tasks:
  qwen-7b:
    cmds:
      - python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2-VL-7B-Instruct --host 0.0.0.0 --port 32000

  transformers:
    cmds:
      - pip install git+https://github.com/huggingface/transformers.git --upgrade

  qwen:
    cmds:
      - vllm serve Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int8
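
(Note: these tasks assume the go-task runner. `task qwen-7b` starts the OpenAI-compatible vLLM server on port 32000, matching VLLM_PORT above and the smoke test below; `task qwen` serves the GPTQ Int8 variant on vLLM's default port (8000) unless a --port flag is added.)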
12 changes: 12 additions & 0 deletions modules/odr_caption/requirements.txt
@@ -0,0 +1,12 @@
vllm
transformers
qwen-vl-utils[decord]
fastapi
click
openai
pydantic
termcolor
pillow
flash-attn
python-dotenv
vllm>=0.6.2
1 change: 1 addition & 0 deletions modules/odr_caption/test/NOTICE.md
@@ -0,0 +1 @@
All images in this folder should be available under open licenses.
39 changes: 39 additions & 0 deletions modules/odr_caption/test/smoke.py
@@ -0,0 +1,39 @@
import os
import asyncio
from dotenv import load_dotenv
from odr_caption.agents.ImageCaptioner import ImageCaptioner
from odr_caption.utils.logger import logger

load_dotenv()

# Set vLLM's API server URL
vllm_api_base = "http://localhost:32000/v1"


async def main():
    cwd = os.getcwd()
    image_path = f"{cwd}/test/test_images/test_image_1.png"

    # Initialize ImageCaptioner
    captioner = ImageCaptioner(
        vllm_server_url=vllm_api_base,
        model_name="Qwen/Qwen2-VL-7B-Instruct",  # match the model served by the qwen-7b task
        max_tokens=2048,
        temperature=0.35,
    )

    # Define system message and prompt
    system_message = "You are a helpful assistant."
    prompt = "What is the text in the illustration?"

    # Caption the image
    response = await captioner.caption_image(image_path, system_message, prompt)

    # Print the response
    logger.info(f"Chat response: {response}")
    if response.choices:
        logger.info(f"Generated caption: {response.choices[0].message.content}")


if __name__ == "__main__":
    asyncio.run(main())
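
(Note: ImageCaptioner itself is not added in this commit. For context, a minimal sketch of an equivalent direct call against the vLLM OpenAI-compatible endpoint using the openai 1.x client; the helper name and message layout here are assumptions, not the module's actual implementation.)

    import asyncio
    import base64
    from openai import AsyncOpenAI


    async def caption_once(image_path: str, prompt: str) -> str:
        # Point the OpenAI client at the local vLLM server; the API key is unused.
        client = AsyncOpenAI(base_url="http://localhost:32000/v1", api_key="EMPTY")
        with open(image_path, "rb") as f:
            image_b64 = base64.b64encode(f.read()).decode("utf-8")
        response = await client.chat.completions.create(
            model="Qwen/Qwen2-VL-7B-Instruct",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {
                    "role": "user",
                    "content": [
                        {"type": "image_url",
                         "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
                        {"type": "text", "text": "What is the text in the illustration?"},
                    ],
                },
            ],
            max_tokens=2048,
            temperature=0.35,
        )
        return response.choices[0].message.content


    if __name__ == "__main__":
        caption = asyncio.run(
            caption_once("test/test_images/test_image_1.png",
                         "What is the text in the illustration?")
        )
        print(caption)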
Sixth changed file: a binary file that the diff view cannot render (presumably the test image referenced by the smoke test, modules/odr_caption/test/test_images/test_image_1.png).
