
Error when fine-tuning a classification task with glm-large-chinese #194

Open
mechigonft opened this issue Nov 9, 2023 · 0 comments
```
Traceback (most recent call last):
  File "/ossfs/workspace/vector/model/GLM-main/finetune_glm.py", line 470, in <module>
    main(args)
  File "/ossfs/workspace/vector/model/GLM-main/tasks/superglue/finetune.py", line 119, in main
    finetune(args, train_valid_datasets_provider, model_kwargs,
  File "/ossfs/workspace/vector/model/GLM-main/finetune_glm.py", line 287, in finetune
    tokenizer = prepare_tokenizer(args)
  File "/ossfs/workspace/vector/model/GLM-main/configure_data.py", line 124, in prepare_tokenizer
    tokenizer = make_tokenizer(args.tokenizer_type, None, args.tokenizer_path, args.vocab_size,
  File "/ossfs/workspace/vector/model/GLM-main/data_utils/tokenization.py", line 50, in make_tokenizer
    return ChineseSPTokenizer(fix_command_token=fix_command_token, **kwargs)
  File "/ossfs/workspace/vector/model/GLM-main/data_utils/tokenization.py", line 1140, in __init__
    self.text_tokenizer = sp_tokenizer.from_pretrained()
  File "/ossfs/workspace/vector/model/GLM-main/data_utils/sp_tokenizer.py", line 150, in from_pretrained
    return get_encoder(PRETRAINED_MODEL_FILE, "")
  File "/ossfs/workspace/vector/model/GLM-main/data_utils/sp_tokenizer.py", line 136, in get_encoder
    return Encoder_SP(encoder_file)
  File "/ossfs/workspace/vector/model/GLM-main/data_utils/sp_tokenizer.py", line 101, in __init__
    self.sp.Load(model_path)
  File "/opt/conda/lib/python3.8/site-packages/sentencepiece/__init__.py", line 905, in Load
    return self.LoadFromFile(model_file)
  File "/opt/conda/lib/python3.8/site-packages/sentencepiece/__init__.py", line 310, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "chinese_sentencepiece/cog-pretrain.model": No such file or directory Error #2
```
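
The traceback shows the tokenizer setup failing because the SentencePiece model file is not on disk at the relative path the repo expects (`chinese_sentencepiece/cog-pretrain.model`). A minimal sketch to confirm the file is present and loadable before launching `finetune_glm.py` — the path is taken from the traceback above, and where to obtain `cog-pretrain.model` is not covered here:

```python
import os
import sentencepiece as spm

# Path expected by data_utils/sp_tokenizer.py, as reported in the traceback.
# Relative, so it must exist relative to the working directory you run from.
MODEL_PATH = "chinese_sentencepiece/cog-pretrain.model"

if not os.path.exists(MODEL_PATH):
    raise FileNotFoundError(
        f"{MODEL_PATH} is missing; place the SentencePiece model file at this "
        "relative path (or adjust the path in sp_tokenizer.py) before fine-tuning."
    )

# Same call that fails at sp_tokenizer.py line 101 (self.sp.Load(model_path)).
sp = spm.SentencePieceProcessor()
sp.Load(MODEL_PATH)
print("Tokenizer model loaded, vocab size:", sp.vocab_size())
```

If this script raises the same `OSError`, the problem is the missing file or the working directory, not the fine-tuning code itself.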
