Hi, I have a set of p4 (A100) instances available through SageMaker training jobs, and I would like to fine-tune StarCoder on a function-summarization task. Can I use the Hugging Face "Train" SageMaker interface together with the Transformers library to run the fine-tuning job, or do I need to take the fine-tuning script from this GitHub repository and adapt it to run on SageMaker?
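For context, the setup I had in mind looks roughly like the sketch below, using the SageMaker Python SDK's `HuggingFace` estimator to launch a training script on a p4 instance. Everything here is an illustrative assumption on my part (the entry-point name, source dir, IAM role, DLC versions, and hyperparameter names), not something taken from this repo:

```python
# Sketch (untested): launching a fine-tuning script via the SageMaker
# HuggingFace estimator. All names below are placeholders/assumptions.
from sagemaker.huggingface import HuggingFace

huggingface_estimator = HuggingFace(
    entry_point="finetune.py",        # hypothetical: an adapted training script
    source_dir="./scripts",           # hypothetical: local dir uploaded with the job
    instance_type="ml.p4d.24xlarge",  # p4 (A100) instance class
    instance_count=1,
    role="arn:aws:iam::<account>:role/<sagemaker-role>",  # placeholder IAM role
    transformers_version="4.28",      # check which DLC versions are available
    pytorch_version="2.0",
    py_version="py310",
    hyperparameters={
        "model_name_or_path": "bigcode/starcoder",
        "epochs": 1,
    },
)

# huggingface_estimator.fit({"train": "s3://<bucket>/<prefix>"})  # placeholder S3 URI
```

Is that estimator-based path workable here, or does the repo's script assume a different launch mechanism?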
Do the SageMaker scripts support PEFT/LoRA fine-tuning, or is that only available in the provided fine-tuning script?
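To be clear about what I mean by PEFT/LoRA: the base weights stay frozen and only a low-rank update is trained. A minimal NumPy sketch of the idea (illustrative sizes, not this repo's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r, alpha = 8, 2, 4  # hidden size, LoRA rank, scaling factor (illustrative values)
W = rng.standard_normal((d, d))  # frozen "pretrained" weight

# LoRA trains a low-rank update B @ A instead of touching W.
A = rng.standard_normal((r, d)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init -> update starts at 0

def lora_forward(x):
    # Base path plus scaled low-rank path: x (W + (alpha/r) B A)^T
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, d))
# With B initialized to zero, the adapted model matches the frozen base exactly.
assert np.allclose(lora_forward(x), x @ W.T)

# Only A and B are trainable, so the parameter count drops sharply.
trainable = A.size + B.size
total = W.size + trainable
print(f"trainable params: {trainable} / {total}")
```

In practice this would be the `peft` library's `LoraConfig`/`get_peft_model` wrapping the Transformers model; my question is whether the SageMaker path already wires that up.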
Also, do the Hugging Face "Train" SageMaker interface scripts work for StarEncoder as well?