
Commit

update notebook link
holgerroth committed Feb 26, 2024
1 parent 10c4eb9 commit 714b3ca
Showing 2 changed files with 2 additions and 2 deletions.
@@ -14,7 +14,7 @@
"The prompt learning technique shown in the example is [p-tuning](https://arxiv.org/abs/2103.10385), which adds a small prompt encoder network to the LLM\n",
"to produce virtual token embeddings that guide the model toward the desired output of the downstream task.\n",
"\n",
"For more details on how to change hyperparameters for prompt learning in NeMo, see this [tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb) which is also the basis for this NVFlare tutorial.\n",
"For more details on how to change hyperparameters for prompt learning in NeMo, see this [tutorial](https://github.com/NVIDIA/NeMo/blob/v1.22.0/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb) which is also the basis for this NVFlare tutorial.\n",
"\n",
"<img src=\"./figs/p-tuning.svg\" width=\"50%\" height=\"50%\">\n",
"\n",
@@ -7,7 +7,7 @@ a downstream task such as financial sentiment predictions.
The prompt learning technique shown in the example is p-tuning which adds a small prompt encoder network to the LLM
to produce virtual tokens that guide the model toward the desired output of the downstream task.

- For more details on how to change hyperparameters for prompt learning in NeMo, see this [tutorial](https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb) which is also the basis for this NVFlare tutorial.
+ For more details on how to change hyperparameters for prompt learning in NeMo, see this [tutorial](https://github.com/NVIDIA/NeMo/blob/v1.22.0/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb) which is also the basis for this NVFlare tutorial.

## Dependencies
This example running a 20B GPT model requires more computational resources.
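Both changed files describe p-tuning as adding a small prompt encoder network to a frozen LLM to produce virtual token embeddings. As a rough, illustrative sketch of that idea only (not the NeMo or NVFlare implementation; the class and parameter names below are hypothetical), a minimal PyTorch prompt encoder might look like this:

```python
# Minimal p-tuning-style prompt encoder sketch (illustrative assumption, not NeMo code).
# It learns embeddings for a fixed set of virtual tokens and passes them through a
# small LSTM + MLP encoder; the resulting embeddings would be prepended to the frozen
# LLM's input embeddings, and only the encoder parameters are trained.
import torch
import torch.nn as nn

class PromptEncoder(nn.Module):
    def __init__(self, num_virtual_tokens: int, hidden_size: int):
        super().__init__()
        self.embedding = nn.Embedding(num_virtual_tokens, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, batch_size: int) -> torch.Tensor:
        # Encode the virtual token ids into continuous prompt embeddings.
        ids = torch.arange(self.embedding.num_embeddings).unsqueeze(0)
        out, _ = self.lstm(self.embedding(ids))
        virtual_tokens = self.mlp(out)  # shape: (1, num_virtual_tokens, hidden_size)
        return virtual_tokens.expand(batch_size, -1, -1)

# Example: produce virtual token embeddings for a batch of 4 inputs.
encoder = PromptEncoder(num_virtual_tokens=10, hidden_size=64)
prompts = encoder(batch_size=4)  # (4, 10, 64), prepended to the LLM input embeddings
```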

