I am trying to train a model on the SLURP dataset using the NeMo framework by following the steps outlined in the README.md file under NeMo/examples/slu/speech_intent_slot.
Data Preparation and Building Tokenizers completed successfully.
The .json files (ending with slu.json) were correctly generated under the slurp_data folder.
However, when I run the training script, I encounter the following error:
```
self._train_dl = self._setup_dataloader_from_config(config=train_data_config)
  File "/home/ddyang/miniconda3/envs/nemo/lib/python3.10/site-packages/nemo/collections/asr/models/slu_models.py", line 420, in _setup_dataloader_from_config
    return torch.utils.data.DataLoader(
  File "/home/ddyang/miniconda3/envs/nemo/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 350, in __init__
    sampler = RandomSampler(dataset, generator=generator)  # type: ignore[arg-type]
  File "/home/ddyang/miniconda3/envs/nemo/lib/python3.10/site-packages/torch/utils/data/sampler.py", line 143, in __init__
    raise ValueError(f"num_samples should be a positive integer value, but got num_samples={self.num_samples}")
ValueError: num_samples should be a positive integer value, but got num_samples=0
```
It seems like the data was not read correctly. Could you please guide me on how to resolve this issue?
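For context, `num_samples=0` means the dataloader saw an empty dataset, so a quick sanity check is to count the entries in the generated manifest before launching training. The sketch below builds a tiny manifest in NeMo's JSON-lines convention and counts its non-empty lines; the field names shown are an assumption for illustration, not taken from the actual SLURP output:

```python
import json
import os
import tempfile

# Hypothetical manifest entry in NeMo's JSON-lines format (one object per line).
# In the real setup this would be one of the *_slu.json files under slurp_data/.
sample_entries = [
    {
        "audio_filepath": "audio/example1.wav",  # assumed field name
        "duration": 2.1,
        "text": "example semantics string",      # placeholder content
    },
]

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    for entry in sample_entries:
        f.write(json.dumps(entry) + "\n")
    manifest_path = f.name

# Count usable entries: RandomSampler raises the num_samples=0 error
# when the dataset it wraps contains no items.
with open(manifest_path) as f:
    num_entries = sum(1 for line in f if line.strip())

print(f"{num_entries} entries in manifest")  # 0 would mean training cannot start

os.unlink(manifest_path)
```

If the count on the real manifest is zero, the data-preparation step likely failed silently; if it is nonzero, the training config may be pointing at the wrong path or filtering out all samples.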