Reminder
System Info
llamafactory version: 0.9.2.dev0
Reproduction
### model
model_name_or_path: F:/LM/model_zoo/qwen2.5-3B-instruct
### method
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
lora_rank: 16
lora_alpha: 16
lora_dropout: 0.05
### dataset
dataset: qwen_train_data
template: qwen
cutoff_len: 3072
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/qwen2.5-3b/lora/sft
logging_steps: 100
save_steps: 100
plot_loss: true
overwrite_output_dir: true
### train
resume_from_checkpoint: F:/LM/Qwen2/LLaMA-Factory-main/LLaMA-Factory-main/saves/qwen2.5-3b/lora/sft/checkpoint-1000
per_device_train_batch_size: 1
gradient_accumulation_steps: 16
learning_rate: 1.0e-4
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
fp16: true
ddp_timeout: 180000000
### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 500
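For reference, here is roughly what the method section of this config maps to in plain PEFT code. This is an illustrative sketch, not LLaMA-Factory's internal implementation; PEFT's "all-linear" is the closest analogue of lora_target: all.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Mirrors the method section above: rank 16, alpha 16, dropout 0.05,
# adapters attached to every linear layer.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained("F:/LM/model_zoo/qwen2.5-3B-instruct")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable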
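The dataset: qwen_train_data line assumes the dataset is registered in LLaMA-Factory's data/dataset_info.json. A hypothetical registration might look like the following; the file name, formatting, and column mapping are assumptions, since the issue does not show the user's actual data.

import json

# Hypothetical entry for the custom dataset referenced in the config above.
dataset_info = {
    "qwen_train_data": {
        "file_name": "qwen_train_data.json",  # assumption: real file name unknown
        "formatting": "alpaca",               # assumption: alpaca-style records
        "columns": {
            "prompt": "instruction",
            "query": "input",
            "response": "output",
        },
    }
}

with open("data/dataset_info.json", "w", encoding="utf-8") as f:
    json.dump(dataset_info, f, ensure_ascii=False, indent=2)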
Expected behavior
SFT fine-tuning Qwen 2.5 3B: training runs, but GPU utilization is almost 0. Is this normal? Is there any way to speed up training?
Others
No response
Comment: Please use nvidia-smi to check the GPU utilization.
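A small polling loop along those lines (a sketch; the fields are standard nvidia-smi --query-gpu options). Note that the effective batch size in this config is per_device_train_batch_size 1 × gradient_accumulation_steps 16 = 16; if nvidia-smi shows memory allocated but utilization near zero, the bottleneck is typically CPU-side data loading or preprocessing rather than the GPU itself.

import subprocess
import time

# Poll nvidia-smi once per second and print GPU activity.
FIELDS = "utilization.gpu,utilization.memory,memory.used,power.draw"

while True:
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
        capture_output=True,
        text=True,
        check=True,
    )
    print(result.stdout.strip())
    time.sleep(1)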