System Info
llamafactory version: 0.9.2.dev0
Python version: 3.8.20
PyTorch version: 2.4.1+cu121 (GPU)
Transformers version: 4.46.1
Datasets version: 3.1.0
Accelerate version: 1.0.1
PEFT version: 0.12.0
TRL version: 0.9.6
Reproduction
Why can't the fully fine-tuned model be used for inference in the same way as the original model or the LoRA fine-tuned model after merging?

1. I have successfully run inference with the LoRA fine-tuned model (trained with the lora_sft script and merged with the merge_lora script), using the method described in the Qwen2-VL documentation (see the sketch after this list).
2. However, the model saved after full fine-tuning does not seem to support direct inference in the same way.
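For reference, item 1 refers to the standard Qwen2-VL inference recipe from the Transformers documentation. Below is a minimal sketch of that recipe applied to a local checkpoint; the model path `./qwen2vl_lora_merged` and the image file `demo.jpeg` are hypothetical placeholders, not paths from this issue.

```python
# Minimal sketch of the Qwen2-VL inference recipe from the Transformers docs,
# pointed at a local checkpoint. MODEL_PATH is hypothetical: substitute the
# output directory produced by merge_lora (or by full fine-tuning).
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from qwen_vl_utils import process_vision_info  # pip install qwen-vl-utils

MODEL_PATH = "./qwen2vl_lora_merged"  # hypothetical local checkpoint dir

model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_PATH, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(MODEL_PATH)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "demo.jpeg"},  # hypothetical local image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat-formatted prompt and collect the vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the model's reply.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

In principle, a full fine-tuning checkpoint exported by LLaMA-Factory should load through this same code path, provided the output directory contains the complete `config.json` plus the tokenizer and processor files alongside the weights; if any of those are missing from the full-SFT output directory, that difference would be a likely place to look.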
Others
No response