
Batch Inference Error for qwen2vl Model After Full Fine-Tuning #6563

Closed
1 task done
Bravo5542 opened this issue Jan 8, 2025 · 0 comments
Bravo5542 commented Jan 8, 2025

Reminder

  • I have read the README and searched the existing issues.

System Info

llamafactory version: 0.9.2.dev0
Python version: 3.8.20
PyTorch version: 2.4.1+cu121 (GPU)
Transformers version: 4.46.1
Datasets version: 3.1.0
Accelerate version: 1.0.1
PEFT version: 0.12.0
TRL version: 0.9.6

Reproduction

Why can't the fully fine-tuned model be used for inference in the same way as the original model or a LoRA fine-tuned and merged model?

1. I have successfully run inference with the LoRA fine-tuned model (trained with the lora_sft script and merged with the merge_lora script) using the method given in the qwen2vl documentation (see the sketch after this list).
2. However, the model saved after full fine-tuning does not seem to support direct inference in the same way.
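
For reference, this is a minimal sketch of the inference path the qwen2vl documentation describes, which works with the original and LoRA-merged checkpoints. The checkpoint and image paths are placeholders, and pointing it at a full-SFT output directory is the scenario in question, not a confirmed working setup.

```python
# Minimal sketch of the standard Qwen2-VL inference flow from the
# Transformers / Qwen2-VL docs. "path/to/saved_checkpoint" is a placeholder
# for the fine-tuned output directory (assumption, not a verified path).
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "path/to/saved_checkpoint",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("path/to/saved_checkpoint")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat-formatted prompt and collect the vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and strip the prompt tokens from each sequence before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True))
```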

Others

No response

@github-actions github-actions bot added the pending This problem is yet to be addressed label Jan 8, 2025
@hiyouga hiyouga added invalid This doesn't seem right and removed pending This problem is yet to be addressed labels Jan 8, 2025
@hiyouga hiyouga closed this as completed Jan 8, 2025
Labels
invalid This doesn't seem right