
Is inference directly with the adapter (without merging) now supported for VL models after LoRA fine-tuning? #6993

Closed
1 task done
Road2Redemption opened this issue Feb 19, 2025 · 3 comments
Labels
solved This problem has been already solved

Comments

@Road2Redemption

Reminder

  • I have read the above rules and searched the existing issues.

System Info

latest

Reproduction

The official documentation's section on LoRA merging says the LoRA adapter must be merged first. For VL models (qwen2vl), is it now possible to run inference with the base model and the adapter together, without merging?

Others

No response

@Road2Redemption added the labels `bug` (Something isn't working) and `pending` (This problem is yet to be addressed) on Feb 19, 2025
@hiyouga
Owner

hiyouga commented Feb 19, 2025

Yes, this is supported.

@hiyouga closed this as completed on Feb 19, 2025
@hiyouga added the label `solved` (This problem has been already solved) and removed the labels `bug` and `pending` on Feb 19, 2025
@Road2Redemption
Author

@hiyouga How should this be used? Is there a reference doc? Thanks 🙏

@hiyouga
Owner

hiyouga commented Feb 20, 2025

adapter_name_or_path: path_to_lora
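
To illustrate, here is a minimal inference config sketch for LLaMA-Factory; the base model ID, adapter path, and file name below are assumptions, so substitute your own:

```yaml
# inference.yaml — load a LoRA adapter on top of the base model at runtime (no merge step)
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct   # base model (example; use your own)
adapter_name_or_path: saves/qwen2_vl-7b/lora/sft  # hypothetical path to your LoRA adapter
template: qwen2_vl                               # chat template matching the base model
finetuning_type: lora
```

Launching chat with `llamafactory-cli chat inference.yaml` should then load the adapter together with the base model, so merging is not required for inference.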
