From 64741f9f381f34d4d3bac1d4e5ce2ca2b2fc9175 Mon Sep 17 00:00:00 2001
From: Holger Roth
Date: Tue, 20 Feb 2024 14:55:59 -0500
Subject: [PATCH] fix link

---
 integration/nemo/examples/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/integration/nemo/examples/README.md b/integration/nemo/examples/README.md
index 7551091184..2e5bd21c2a 100644
--- a/integration/nemo/examples/README.md
+++ b/integration/nemo/examples/README.md
@@ -5,7 +5,7 @@ In this example, we utilize NeMo's [PEFT](https://docs.nvidia.com/deeplearning/n
 methods to showcase how to adapt a large language model (LLM) to a downstream task,
 such as financial sentiment predictions.
-### [Supervised fine-tuning (SFT) with NeMo and NVFlare](./prompt_learning/README.md)
+### [Supervised fine-tuning (SFT) with NeMo and NVFlare](./supervised_fine_tuning/README.md)
 An example of using [NVIDIA FLARE](https://nvflare.readthedocs.io/en/main/index.html) with NeMo
 for [supervised fine-tuning (SFT)](https://github.com/NVIDIA/NeMo-Megatron-Launcher#5152-sft-training)
 to fine-tune all parameters of a large language model (LLM) on supervised data to teach the model
 how to follow user specified instructions.