Update README.md
Replaced the former model `gpt2` with the one the demo actually uses, `falcon-7b`
yonishelach authored Aug 9, 2023
1 parent 52809a1 commit 2624905
Showing 1 changed file with 3 additions and 3 deletions.
README.md: 6 changes (3 additions & 3 deletions)
@@ -2,7 +2,7 @@

<img src="./images/hf-ds-mlrun.png" alt="huggingface-mlrun" style="width: 500px"/>

This demo shows how to fine-tune an LLM and build an ML application: the **MLOps master bot**! We'll train [`gpt2-medium`](https://huggingface.co/gpt2) on [**Iguazio**'s MLOps blogs](https://www.iguazio.com/blog/) and cover how easy it is to take a model and code from development to production. Even if it's a big, scary LLM, MLRun will take care of the dirty work!
This demo shows how to fine-tune an LLM and build an ML application: the **MLOps master bot**! We'll train [`falcon-7b`](https://huggingface.co/tiiuae/falcon-7b) on [**Iguazio**'s MLOps blogs](https://www.iguazio.com/blog/) and cover how easy it is to take a model and code from development to production. Even if it's a big, scary LLM, MLRun will take care of the dirty work!
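
Not part of this commit, but for orientation: a minimal, hypothetical sketch of pulling the base model named above from the HuggingFace Hub. The demo's actual loading code lives in its notebook and may differ.

```python
# Illustrative only -- the package and argument choices here are assumptions, not the demo's code.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,  # falcon-7b originally shipped custom modeling code on the Hub
    device_map="auto",       # requires the `accelerate` package; spreads the weights across devices
)
```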

We will use:
* [**HuggingFace**](https://huggingface.co/) - as the main machine learning framework to get the model and tokenizer.
@@ -11,7 +11,7 @@ We will use:

The demo contains a single [notebook](./tutorial.ipynb) that covers the two main stages in every MLOps project:

* **Training Pipeline Automation** - Demonstrating how to get an existing model (`GPT2-Medium`) from HuggingFace's Transformers package and operationalize it through all of its lifecycle phases: data collection, data preparation, training, and evaluation, as a fully automated pipeline.
* **Training Pipeline Automation** - Demonstrating how to get an existing model (`falcon-7b`) from HuggingFace's Transformers package and operationalize it through all of its lifecycle phases: data collection, data preparation, training, and evaluation, as a fully automated pipeline.
* **Application Serving Pipeline** - Showing how to productize the newly trained LLM as a serverless function.

You can find all the Python source code under [/src](./src).
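
To make the two stages concrete, here is a rough, hypothetical sketch of how such a project could be wired up with the MLRun SDK; the file paths, function names, handlers, and parameters below are placeholders rather than the demo's actual source.

```python
# Illustrative only -- paths, names, and parameters are placeholders, not the demo's code.
import mlrun

project = mlrun.get_or_create_project("mlops-master-bot", context="./")

# Training pipeline: register a training function from the source tree and run it as a job.
project.set_function("src/train.py", name="train", kind="job", image="mlrun/mlrun", handler="train")
train_run = project.run_function("train", params={"model_name": "tiiuae/falcon-7b"})

# Application serving pipeline: register the serving function and deploy it as a serverless endpoint.
project.set_function("src/serving.py", name="serve", kind="serving", image="mlrun/mlrun")
project.deploy_function("serve")
```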
@@ -64,4 +64,4 @@ Your environment should include `MLRUN_ENV_FILE=<absolute path to the ./mlrun.en
in this repo), see [mlrun client setup](https://docs.mlrun.org/en/latest/install/remote.html) instructions for details.

> Note: You can also use a remote MLRun service (over Kubernetes) instead of starting a local MLRun instance;
> edit the [mlrun.env](./mlrun.env) file and specify its address and credentials.
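
For illustration, one (assumed) way to point a Python session at that environment file; the path below is a placeholder, and the printed config value is only a quick sanity check.

```python
# Illustrative only -- replace the placeholder with the absolute path to your copy of mlrun.env.
import os

os.environ["MLRUN_ENV_FILE"] = "/absolute/path/to/mlrun.env"

import mlrun  # MLRun is expected to read MLRUN_ENV_FILE when its configuration loads

print(mlrun.mlconf.dbpath)  # should show the MLRun API address taken from the env file
```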
