Minor hiccups getting the Llama3-8B example workflows from the README running on Apple silicon M3 #4341
Comments
nice work
When I train on a Mac M3 Pro it gets stuck during the model's forward pass. Have you run into this?
@wwwbq Thanks! I'll give it a try 😊
hiyouga added the solved label (This problem has been already solved) and removed the pending label (This problem is yet to be addressed) on Jun 28, 2024
How much memory does the machine have, and how many parameters is the model?
Amazing!
Original post:
1. In the example yaml, mixed precision has to be switched off first, since fp16 training is not usable on the MPS backend:

```yaml
fp16: false
```
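For orientation, here is a minimal sketch of where that flag lives, assuming the stock examples/train_lora/llama3_lora_sft.yaml layout; the path and every key other than `fp16: false` are assumptions, not quotes from this post:

```yaml
# Relevant keys only; path and surrounding keys assumed from the stock
# LLaMA-Factory example, trimmed for brevity.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
stage: sft
do_train: true
finetuning_type: lora
template: llama3
output_dir: saves/llama3-8b/lora/sft

fp16: false   # fp16 mixed precision is not available on MPS
bf16: false   # assumption: flip this too if your copy enables bf16
```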
2. From there it is supposedly smooth sailing, but the following errors come up:

2.1 First, many UserWarnings of this kind appear. Either suppress warnings globally, or, if your machine has enough memory, drop the low-memory offload logic altogether by adding `low_cpu_mem_usage: false` to the yaml config (see the sketch after this list); that batch of warnings then disappears.

2.2 As for the KeyError, the cause is not entirely clear, but in the traceback the word `model` is spelled three times in a row in the key, while the keys in the dict only contain it twice, so I edited peft/peft_model.py directly to work around it.

2.3 Further along there is an MPS compatibility problem, which setting an environment variable on the command line resolves.
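If it helps, a minimal sketch of the resulting inference config, assuming the stock examples/inference/llama3_lora_sft.yaml keys; everything except `low_cpu_mem_usage: false` is an assumption, not quoted from this post:

```yaml
# Stock-style inference config with the workaround key added.
model_name_or_path: meta-llama/Meta-Llama-3-8B-Instruct
adapter_name_or_path: saves/llama3-8b/lora/sft
template: llama3
finetuning_type: lora

# Load weights the normal way instead of the low-CPU-memory path; this
# silences the offload-related UserWarnings but needs enough local RAM.
low_cpu_mem_usage: false
```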
Final command:

```sh
PYTORCH_ENABLE_MPS_FALLBACK=1 llamafactory-cli chat examples/inference/llama3_lora_sft.yaml
```
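For context, PYTORCH_ENABLE_MPS_FALLBACK=1 is a standard PyTorch environment variable: operators that lack an MPS implementation fall back to running on the CPU instead of raising an error, trading some speed for compatibility.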
I am not entirely sure whether a caching issue was involved: when I went back to write this up, the errors no longer reproduced after reverting the code. Recording it here anyway so nobody else trips over the same pits.