t2t: PTQ with paddlelite can export a fixed-point (quantized) model, but evaluation reports an error #214
Comments
Thanks for the issue, I'm fixing this problem and will update once finished.
Hello @GtxMiracle, I have fixed the code and now in my env the model after quant can be loaded and run using paddleslim eval.py. Please check the new commit here and the newly added documents here for details.
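For context, loading and evaluating a quantized static-graph model comes down to roughly the following. This is only a minimal sketch of the load-and-run step, not the actual eval.py from the demo; the model path prefix './quant_model/t2t' and the random input are placeholders.

import numpy as np
import paddle

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Load the quantized inference model (placeholder path prefix).
program, feed_names, fetch_targets = paddle.static.load_inference_model(
    './quant_model/t2t', exe)

# Run one dummy batch end to end; a real evaluation would feed validation
# images and compute top-1/top-5 accuracy from fetch_targets.
x = np.random.rand(1, 3, 224, 224).astype('float32')
out = exe.run(program, feed={feed_names[0]: x}, fetch_list=fetch_targets)
print(out[0].shape)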
Hello, I used the latest model file and export script you provided, but the quantized model still reports an error at evaluation time:
@GtxMiracle Was the model exported following the procedure given in the documentation mentioned above? Were there any errors during the dynamic-to-static conversion? Are batch size and other settings configured correctly?
Please provide more information so we can pinpoint the problem.
Traceback (most recent call last):
File "eval.py", line 120, in
main()
File "eval.py", line 116, in main
eval(args)
File "eval.py", line 80, in eval
fetch_list=fetch_targets)
File "/Users/gtx/opt/anaconda3/envs/paddle/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1262, in run
six.reraise(*sys.exc_info())
File "/Users/gtx/opt/anaconda3/envs/paddle/lib/python3.6/site-packages/six.py", line 719, in reraise
raise value
File "/Users/gtx/opt/anaconda3/envs/paddle/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1260, in run
return_merged=return_merged)
File "/Users/gtx/opt/anaconda3/envs/paddle/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1402, in _run_impl
use_program_cache=use_program_cache)
File "/Users/gtx/opt/anaconda3/envs/paddle/lib/python3.6/site-packages/paddle/fluid/executor.py", line 1492, in _run_program
[fetch_var_name])
RuntimeError: (PreconditionNotMet) The number of first scale values must be the same with corresponding dimension value of Input(X) when the Scales has two elements, but 64 != 3136 here. [Hint: Expected scales[0]->numel() == in->dims()[x_num_col_dims], but received scales[0]->numel():64 != in->dims()[x_num_col_dims]:32.] (at /home/Paddle/paddle/fluid/operators/fake_dequantize_op.h:90)
[operator < fake_channel_wise_dequantize_max_abs > error]
The floating-point inference model was exported as follows:
from config import get_config
from t2t_vit import build_t2t_vit as build_model
import paddle
from paddle.static import InputSpec

# config files in ./configs/
config = get_config('./configs/t2t_vit_7.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./t2t_vit_7.pdparams')
model.set_state_dict(model_state_dict)
model.eval()

net = paddle.jit.to_static(model)
data = paddle.rand((1, 3, 224, 224))
out = net(data)

x_spec = InputSpec(shape=[None, 3, 224, 224], dtype='float32', name='x')
paddle.jit.save(model, './t2t', input_spec=[x_spec])
Quantization and evaluation both use the official demo: https://github.com/PaddlePaddle/PaddleSlim/tree/develop/demo/quant/quant_post
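For reference, the post-training quantization step in that demo boils down to a call to paddleslim.quant.quant_post_static. The sketch below only illustrates how the exported t2t model could be fed into it; the calibration generator, paths, and filenames are placeholders and assumptions, not taken from this issue (the demo calibrates on real ImageNet samples).

import numpy as np
import paddle
from paddleslim.quant import quant_post_static

paddle.enable_static()
exe = paddle.static.Executor(paddle.CPUPlace())

# Placeholder calibration data: a sample generator yielding single CHW images.
# Replace with a generator over real validation images for meaningful scales.
def sample_generator():
    for _ in range(32):
        yield [np.random.rand(3, 224, 224).astype('float32')]

quant_post_static(
    executor=exe,
    model_dir='./',                     # directory holding the exported model
    quantize_model_path='./quant_model',
    sample_generator=sample_generator,
    model_filename='t2t.pdmodel',       # assumed filenames produced by paddle.jit.save with prefix './t2t'
    params_filename='t2t.pdiparams',
    batch_size=16,
    batch_nums=4)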