ile "d:\Bert_pre\GPT_2\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\tokenizations\tokenization_bert.py", line 131, in init
"model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)".format(vocab_file))
ValueError: Can't find a vocabulary file at path 'cache/vocab_small.txt'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
这个报错是什么
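For reference, a minimal sketch of the two options the error message itself points at, assuming the BertTokenizer in tokenizations/tokenization_bert.py follows the standard constructor and from_pretrained interface; the model name "bert-base-chinese" below is only an illustrative placeholder:

```python
from tokenizations.tokenization_bert import BertTokenizer

# Option 1: build the tokenizer from a vocabulary file that actually exists
# on disk (the error above means nothing was found at cache/vocab_small.txt,
# so the file either has to be placed there or the path adjusted).
tokenizer = BertTokenizer(vocab_file="cache/vocab_small.txt")

# Option 2: as the error message suggests, load the vocabulary from a
# pretrained model instead of a local vocab file.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
```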
ile "d:\Bert_pre\GPT_2\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\tokenizations\tokenization_bert.py", line 131, in init
"model use
tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
".format(vocab_file))ValueError: Can't find a vocabulary file at path 'cache/vocab_small.txt'. To load the vocabulary from a Google pretrained model use
tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
这个报错是什么
The text was updated successfully, but these errors were encountered: