Replies: 4 comments
>>> shad94
[January 15, 2020, 5:33pm]
Hi, maybe someone has had the same issue with phoneme creation.
Phonemes are generated during the 1st epoch. I am trying to train with a
bigger dataset (~20h); however, training is interrupted during the 1st epoch:
> ! Run is removed from /home/marta/Desktop/inz/test/TTS/./results/ljspeech-January-15-2020_05+52PM-7eb291c
> Traceback (most recent call last):
>   File "/home/marta/Desktop/inz/test/TTS/datasets/TTSDataset.py", line 94, in _load_or_generate_phoneme_sequence
>     phonemes = np.load(cache_path)
>   File "/home/marta/Desktop/Desktop/inż/TTS/.eggs/numpy-1.15.4-py3.6-linux-x86_64.egg/numpy/lib/npyio.py", line 384, in load
>     fid = open(file, 'rb')
> FileNotFoundError: [Errno 2] No such file or directory: 'mozilla_us_phonemes/lalka_2_08_f000163_phoneme.npy'
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
>   File "train.py", line 704, in <module>
>     main(args)
>   File "train.py", line 615, in main
>     global_step, epoch)
>   File "train.py", line 100, in train
>     for num_iter, data in enumerate(data_loader):
>   File "/home/marta/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 346, in __next__
>     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
>   File "/home/marta/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
>     data = [self.dataset[idx] for idx in possibly_batched_index]
>   File "/home/marta/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
>     data = [self.dataset[idx] for idx in possibly_batched_index]
>   File "/home/marta/Desktop/inz/test/TTS/datasets/TTSDataset.py", line 164, in __getitem__
>     return self.load_data(idx)
>   File "/home/marta/Desktop/inz/test/TTS/datasets/TTSDataset.py", line 113, in load_data
>     text = self._load_or_generate_phoneme_sequence(wav_file, text)
>   File "/home/marta/Desktop/inz/test/TTS/datasets/TTSDataset.py", line 97, in _load_or_generate_phoneme_sequence
>     cache_path)
>   File "/home/marta/Desktop/inz/test/TTS/datasets/TTSDataset.py", line 84, in _generate_and_cache_phoneme_sequence
>     enable_eos_bos=False)
>   File "/home/marta/Desktop/inz/test/TTS/utils/text/__init__.py", line 57, in phoneme_to_sequence
>     to_phonemes = text2phone(clean_text, language)
>   File "/home/marta/Desktop/inz/test/TTS/utils/text/__init__.py", line 31, in text2phone
>     ph = phonemize(text, separator=seperator, strip=False, njobs=1, backend='espeak', language=language)
>   File "/home/marta/.local/lib/python3.6/site-packages/phonemizer/phonemize.py", line 149, in phonemize
>     logger=logger)
>   File "/home/marta/.local/lib/python3.6/site-packages/phonemizer/backend/espeak.py", line 42, in __init__
>     super(self.__class__, self).__init__(language, logger=logger)
>   File "/home/marta/.local/lib/python3.6/site-packages/phonemizer/backend/base.py", line 43, in __init__
>     'initializing backend %s-%s', self.name(), self.version())
>   File "/home/marta/.local/lib/python3.6/site-packages/phonemizer/backend/espeak.py", line 104, in version
>     long_version = cls.long_version()
>   File "/home/marta/.local/lib/python3.6/site-packages/phonemizer/backend/espeak.py", line 92, in long_version
>     '{} --help'.format(cls.espeak_exe()), posix=False)).decode(
>   File "/usr/lib/python3.6/subprocess.py", line 356, in check_output
>     **kwargs).stdout
>   File "/usr/lib/python3.6/subprocess.py", line 423, in run
>     with Popen(*popenargs, **kwargs) as process:
>   File "/usr/lib/python3.6/subprocess.py", line 729, in __init__
>     restore_signals, start_new_session)
>   File "/usr/lib/python3.6/subprocess.py", line 1295, in _execute_child
>     restore_signals, start_new_session, preexec_fn)
> OSError: [Errno 12] Cannot allocate memory
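The second traceback is raised while the data loader regenerates the missing cache entry: phonemizer starts espeak through subprocess, and that spawn is what fails with [Errno 12]. The failing call can also be tried outside of training. Below is a minimal sketch (not from the original post) that calls phonemize() the same way text2phone does in the traceback; the sample sentence and language code are placeholders, so use the phoneme_language from your own config.json. It only checks whether espeak runs at all outside the DataLoader workers.

```python
# Minimal sketch (assumptions noted): call phonemizer the same way TTS's
# text2phone does in the traceback, outside of the training run, to see
# whether espeak works on its own or the memory error only appears in training.
from phonemizer.phonemize import phonemize

sample_text = "To jest zdanie testowe."  # placeholder sentence
language = "pl"                          # placeholder: use phoneme_language from config.json

ph = phonemize(sample_text, strip=False, njobs=1,
               backend="espeak", language=language)
print(ph)
```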
I checked the dataset with the notebook and everything was fine.
I also checked the generated phoneme files, and some that should have been
generated are missing. I am also attaching my
config.json
file.
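For reference, the missing cache files can be listed with a quick script along these lines (a rough sketch; the metadata filename, its wav_path|text layout, and the <basename>_phoneme.npy naming are assumptions inferred from the cache path in the traceback and may differ between TTS versions):

```python
# Rough sketch: list which phoneme cache files are missing for a dataset.
# Assumptions (not from the original post): metadata.csv uses "wav_path|text"
# lines, and cache files are named "<wav basename>_phoneme.npy", matching
# 'lalka_2_08_f000163_phoneme.npy' from the traceback.
import os

phoneme_cache_path = "mozilla_us_phonemes"  # from the traceback
metadata_file = "metadata.csv"              # placeholder path

missing = []
with open(metadata_file, encoding="utf-8") as f:
    for line in f:
        wav_path = line.split("|")[0].strip()
        base = os.path.splitext(os.path.basename(wav_path))[0]
        cache_file = os.path.join(phoneme_cache_path, base + "_phoneme.npy")
        if not os.path.isfile(cache_file):
            missing.append(cache_file)

print(f"{len(missing)} phoneme cache files are missing")
for path in missing[:20]:  # show the first few
    print(path)
```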
[This is an archived TTS discussion thread from discourse.mozilla.org/t/error-in-phonemes-production]