Would you kindly update Xlora to support Quantized Models? #24
Comments
@Abdullah-kwl, could you please paste the result of printing the model?

PeftModelForCausalLM(
I have tested your updated code (#25). Quantized models now train using xlora; it started working with a quantized model, but I am running into an issue when I try to run inference with the trained quantized xlora model. The error is: RecursionError: maximum recursion depth exceeded while calling a Python object (a sketch of the failing call is below).

You can review my notebook at: https://colab.research.google.com/drive/1_B1ualsMbRfYWy0gdjdMi9RSDU-qmPHf#scrollTo=I4UZaqDAnnB6
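For reference, a minimal sketch of the inference path that reportedly fails. The base model id, the saved-adapter directory, the prompt, and the exact xlora.from_pretrained signature are assumptions for illustration, not taken from the notebook:

```python
import torch
import xlora
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base model

# Reload the base model in 4-bit, matching how it was quantized for training.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

# Attach the trained X-LoRA weights; this call's signature is an assumption
# based on xlora's from_pretrained helper, and the path is a placeholder.
model = xlora.from_pretrained("./xlora_trained", model, "cuda")

tokenizer = AutoTokenizer.from_pretrained(base_id)
inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)

with torch.no_grad():
    # The RecursionError is reportedly raised during generation; one common
    # cause of such errors is a patched forward/__getattr__ that ends up
    # calling itself on the wrapped quantized layers.
    output_ids = model.generate(**inputs, max_new_tokens=32)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```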
Thank you. I plan on working on this later today. |
Also, check out this notebook: https://colab.research.google.com/drive/1Eyh-mBd0LpcJwyzBHjGKhwNLQ9R74eLl?usp=drive_open — you can verify that a few lines are being repeated in the output.
What adjustments should we make if we wish to upgrade XLora for IA^3? |
@Abdullah-kwl, we have begun work here and it will be completed shortly. |
Hi @EricLBuehler, I just wanted to confirm that the current version supports quantised models, since it looks like some tests haven't passed here and the commit hasn't been merged into the main branch.
To train xlora on free Colab we need to load a quantized model, but currently xlora does not support quantized models and the layers are not swapped.

Please upgrade xlora to support quantized models: on free Colab one mostly uses BitsAndBytesConfig to load the model in 4-bit or 8-bit, but a model quantized that way currently cannot be converted into an xlora model. A sketch of the intended workflow is below.
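A minimal sketch of the requested workflow, assuming xlora's add_xlora_to_model/xLoRAConfig API as shown in its README; the base model id, adapter path, and config values here are placeholders:

```python
import torch
import xlora
from transformers import AutoConfig, AutoModelForCausalLM, BitsAndBytesConfig

base_id = "mistralai/Mistral-7B-Instruct-v0.1"  # assumed base model

# Load the base model in 4-bit, as is typical on free Colab GPUs.
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)

config = AutoConfig.from_pretrained(base_id)

# This is the step that reportedly fails for quantized models: the LoRA
# layers are not swapped for their X-LoRA counterparts.
model = xlora.add_xlora_to_model(
    model=model,
    xlora_config=xlora.xLoRAConfig(
        config.hidden_size,
        base_model_id=base_id,
        xlora_depth=4,
        device=torch.device("cuda"),
        adapters={"adapter_1": "./adapter_1"},  # hypothetical adapter path
    ),
    verbose=True,
)
```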