Reminder
System Info
llamafactory version: 0.9.2.dev0

Reproduction
I have fine-tuned the qwen2vl model using the command:
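(The command itself was not preserved above. For reference only: full-parameter SFT in LLaMA-Factory is typically launched as `llamafactory-cli train <config>.yaml`, e.g. with the repository's stock `examples/train_full/qwen2vl_full_sft.yaml` config; the exact command and config used in this run are unknown.)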
After saving the model in the "saves" directory, I attempted to perform batch inference using the provided script:
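(The script is also not preserved. For context, the inference method from the qwen2vl documentation, i.e. the Qwen2-VL model card recipe, looks roughly like the sketch below. The checkpoint path `saves/qwen2vl_full_sft` and the sample image are hypothetical placeholders, and `qwen_vl_utils` must be installed via `pip install qwen-vl-utils`.)

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Assumed output directory of the full fine-tuning run (hypothetical path).
model_path = "saves/qwen2vl_full_sft"

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_path)

# One chat-format request; batching works the same way with lists of messages.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "demo.jpg"},  # hypothetical sample image
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template and collect the image/video tensors.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
# Drop the prompt tokens so only the newly generated answer is decoded.
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True))
```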
However, I encountered the following error:
1. The model_path I used points to the model saved after running the full fine-tuning script.
2. I have successfully used the LoRA fine-tuned model (trained with the lora_sft script and merged with the merge_lora script); it supports inference with the method from the qwen2vl documentation.
3. However, the model saved after full fine-tuning does not seem to support direct inference in the same way (a possible diagnostic is sketched after this list).
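(A first check on item 3, which is an assumption about the failure mode rather than something confirmed by this report: AutoProcessor can only load from the full-SFT directory if the processor and tokenizer files were written next to the weights, which merge_lora exports do include. A minimal comparison sketch, with both directory names hypothetical:)

```python
import os

# Hypothetical directories: the full-SFT save and the merged-LoRA export.
full_dir = "saves/qwen2vl_full_sft"
merged_dir = "saves/qwen2vl_lora_merged"

# Files that from_pretrained / AutoProcessor typically expect for Qwen2-VL.
expected = [
    "config.json",
    "generation_config.json",
    "preprocessor_config.json",
    "tokenizer_config.json",
    "tokenizer.json",
    "chat_template.json",
]

for d in (full_dir, merged_dir):
    present = set(os.listdir(d)) if os.path.isdir(d) else set()
    missing = [f for f in expected if f not in present]
    print(f"{d}: missing {missing or 'nothing'}")
```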
Others
No response
Thank you for your reply. However, I still want to know: why can't the full fine-tuned model be used in the same way as the original model, or as the model that was fine-tuned with LoRA and merged?