This repo is forked from tloen/alpaca-lora; see the original README.md for details. We add Starwhale support so that users can manage the model/dataset lifecycle with Starwhale, including:
- fine-tune the model locally or remotely to produce a new model version
- serve an API locally or remotely
- evaluate the model with Starwhale datasets
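
Fine-tuning and evaluation both consume instruction-style records. As a reference, here is a minimal sketch of the alpaca-style prompt formatting that alpaca-lora applies to each record before training; the function name is illustrative, not part of this repo's API:

```python
def generate_prompt(instruction: str, input: str = "") -> str:
    """Render one instruction record into alpaca-style prompt text."""
    if input:
        # Record with additional context: use the instruction + input template.
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input}\n\n"
            "### Response:\n"
        )
    # Record without context: use the instruction-only template.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

During training the model's response text is appended after the `### Response:` marker; at inference time, generation starts from that marker.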
Build the Starwhale dataset and model packages:

```bash
python build_swds.py
python build_swmp.py
```

Fine-tune the model with a Starwhale dataset:

```bash
swcli model run -u llama-7b-hf/version/latest -d test/version/latest -h swmp_handlers:fine_tune
```

Serve an API for the model:

```bash
swcli model serve -u llama-7b-hf/version/latest
```