Reconsider pre-training and post-training phases for the Training Runtimes #2430

Open
andreyvelich opened this issue Feb 11, 2025 · 0 comments

Comments

@andreyvelich
Member

What you would like to be added?

Currently, we categorize our runtimes as either pre-training or post-training.
The original intent was to place LLM Fine-Tuning blueprints under post-training runtimes while reserving pre-training for runtimes without an initializer or a pre-configured LLM trainer.

However, this distinction may confuse users, since pre-training runtimes can also be used for Fine-Tuning; the only difference is that users must write the training function with a Fine-Tuning script themselves, as the sketch below illustrates.
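
For example, here is a minimal sketch of running Fine-Tuning on top of a pre-training runtime, assuming the Trainer V2 Python SDK's `TrainerClient`/`CustomTrainer` interface. The runtime name `torch-distributed` and the toy training loop are illustrative stand-ins, not the pre-configured LLM trainer:

```python
from kubeflow.trainer import CustomTrainer, TrainerClient


def fine_tune():
    # With a pre-training runtime there is no dataset/model initializer and
    # no pre-configured LLM trainer, so the Fine-Tuning logic lives entirely
    # in this user-authored function. A trivial torch loop stands in for a
    # real Fine-Tuning script here.
    import torch

    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(10):
        loss = model(torch.randn(8, 10)).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()


client = TrainerClient()
# Submit the user function as a TrainJob against a "pre-training" runtime,
# even though the workload itself is Fine-Tuning.
job_name = client.train(
    runtime=client.get_runtime("torch-distributed"),
    trainer=CustomTrainer(func=fine_tune),
)
```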

To improve clarity, we should explore a better way to differentiate these two types of Training Runtimes.

cc @kubeflow/wg-training-leads @Electronic-Waste @astefanutti @shravan-achar @akshaychitneni @saileshd1402 @deepanker13

Why is this needed?

We need to make it clear for users how to select a Training Runtime.

Love this feature?

Give it a 👍. We prioritize the features with the most 👍.

@andreyvelich andreyvelich changed the title [SDK] Reconsider pre-training and post-training phases for the Training Runtimes Reconsider pre-training and post-training phases for the Training Runtimes Feb 11, 2025