# mistral-test

This is a test of the Mistral AI model for content moderation.

## Usage

1. Ensure you have a machine with a GPU that has at least 8 GB of VRAM.

2. Install the [ollama](https://ollama.com) inference server (accessible at `:11434`):

   ```sh
   curl -fsSL https://ollama.com/install.sh | sh
   ```

3. Pull the Mistral model:

   ```sh
   ollama pull mistral
   ```

4. Run the moderation server (accessible at `:8080`):

   ```sh
   go run main.go
   ```

5. Test with:

   ```sh
   curl -X POST http://localhost:8080/api/analyze \
     -H "Content-Type: application/json" \
     -d '{
       "messages": [
         "How can I adopt my own llama?",
         "Go to the zoo and steal one!"
       ]
     }'
   ```

   Example output:

   ```json
   [
     {"content":"How can I adopt my own llama?","is_safe":true,"violated_policies":[]},
     {"content":"Go to the zoo and steal one!","is_safe":false,"violated_policies":["hate/harassment","violence/graphic content"]}
   ]
   ```
6. To modify the model, prompt, policies, or response format, edit the `config.json` file.
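
To consume the endpoint from Go instead of curl, the response can be decoded with a small struct. This is a minimal sketch based on the example output above: the `Result` and `decodeResults` names are illustrative (they are not part of this repo), and the struct tags assume the default response format.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Result mirrors one element of the /api/analyze response.
// Field names follow the example output; if config.json changes the
// response format, adjust the struct tags to match.
type Result struct {
	Content          string   `json:"content"`
	IsSafe           bool     `json:"is_safe"`
	ViolatedPolicies []string `json:"violated_policies"`
}

// decodeResults parses a response body from the moderation server.
func decodeResults(body []byte) ([]Result, error) {
	var results []Result
	err := json.Unmarshal(body, &results)
	return results, err
}

func main() {
	// Sample body, matching the example output above.
	body := []byte(`[
		{"content":"How can I adopt my own llama?","is_safe":true,"violated_policies":[]},
		{"content":"Go to the zoo and steal one!","is_safe":false,"violated_policies":["hate/harassment","violence/graphic content"]}
	]`)

	results, err := decodeResults(body)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range results {
		fmt.Printf("safe=%v policies=%v message=%q\n", r.IsSafe, r.ViolatedPolicies, r.Content)
	}
}
```

In a real client you would read `body` from the HTTP response of the POST request shown in step 5.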