>>> Sadam1195 [June 25, 2020, 11:11am]
Hi, I started training with:

> python distribute.py --config_path config.json | tee training.log
After training for around 223 epochs, it broke and left this output in my
training.log:
> EVALUATION
> warning: audio amplitude out of range, auto clipped.
> | > Synthesizing test sentences
> warning: audio amplitude out of range, auto clipped.
> warning: audio amplitude out of range, auto clipped.
> warning: audio amplitude out of range, auto clipped.
> --> EVAL PERFORMANCE
> | > avg_postnet_loss: 0.17037 (+0.00013)
> | > avg_decoder_loss: 0.25991 (-0.00214)
> | > avg_stopnet_loss: 0.06618 (+0.00142)
> | > avg_align_error: 0.35647 (-0.00567)
> | > avg_ga_loss: 0.01053 (-0.00001)
>
> EPOCH: 223/1000
>
> Number of output frames: 3
>
> TRAINING (2020-06-24 11:14:35)
> ! Run is kept in .../LJSpeech/ljspeech-June-24-2020_01+02AM-8f8ba5e
> ['train.py', '--continue_path=', '--restore_path=', '--config_path=config.json', '--group_id=group_2020_06_24-010230', '--rank=0']
I thought my VM might have rebooted, so I restarted training from the last
checkpoint:
> python train.py --config_path config.json --restore_path /workspace/LJSpeech/ljspeech-June-24-2020_01+02AM-8f8ba5e/checkpoint_50000.pth.tar | tee training.log
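(I also noticed a `--continue_path` flag in the run arguments logged above. My understanding, which I haven't verified, is that it resumes from the newest checkpoint inside an existing run folder, something like:

```bash
# Alternative resume I considered (my understanding of the flag, not verified):
# --continue_path picks up the latest checkpoint and keeps the same run folder.
python train.py --continue_path /workspace/LJSpeech/ljspeech-June-24-2020_01+02AM-8f8ba5e/
```

but I went with `--restore_path` here.)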
But after 219 epochs, training stopped again with this error:
> EPOCH: 219/1000
>
> Number of output frames: 2
>
> TRAINING (2020-06-25 06:52:31)
> ! Run is kept in .../LJSpeech/ljspeech-June-24-2020_12+50PM-8f8ba5e
> Traceback (most recent call last):
>   File "train.py", line 676, in <module>
>     main(args)
>   File "train.py", line 591, in main
>     global_step, epoch)
>   File "train.py", line 148, in train
>     for num_iter, data in enumerate(data_loader):
>   File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 279, in __iter__
>     return _MultiProcessingDataLoaderIter(self)
>   File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 719, in __init__
>     w.start()
>   File "/opt/conda/lib/python3.7/multiprocessing/process.py", line 112, in start
>     self._popen = self._Popen(self)
>   File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 223, in _Popen
>     return _default_context.get_context().Process._Popen(process_obj)
>   File "/opt/conda/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
>     return Popen(process_obj)
>   File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
>     self._launch(process_obj)
>   File "/opt/conda/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
>     self.pid = os.fork()
> OSError: [Errno 12] Cannot allocate memory
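From the traceback, the crash happens when the DataLoader forks a new worker process at the start of an epoch: `os.fork()` fails with `ENOMEM`, which as far as I understand means the kernel could not allocate memory for the child process. To check whether available memory really shrinks from epoch to epoch, I could log it right before the loader is re-created; a minimal sketch (using `psutil`, which is my own addition and not part of TTS):

```python
# Minimal memory probe to drop into train.py (my own debugging idea, not
# part of TTS). Assumes psutil is installed: pip install psutil
import psutil

def log_memory(tag=""):
    # virtual_memory() reports system-wide RAM; "available" is roughly what
    # a new fork could still claim without swapping.
    mem = psutil.virtual_memory()
    print(f"[mem] {tag}: {mem.available / 1024**3:.2f} GiB available "
          f"({mem.percent}% used)")

# e.g. call log_memory(f"epoch {epoch}") at the top of the train() loop,
# just before `enumerate(data_loader)` re-forks the worker processes.
```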
Here is my config.json:
> {
>   "model": "Tacotron2",
>   "run_name": "ljspeech",
>   "run_description": "tacotron2",
>
>   // AUDIO PARAMETERS
>   "audio": {
>     // stft parameters
>     "num_freq": 513,         // number of stft frequency levels. Size of the linear spectrogram frame.
>     "win_length": 1024,      // stft window length in samples.
>     "hop_length": 256,       // stft window hop length in samples.
>     "frame_length_ms": null, // stft window length in ms. If null, "win_length" is used.
>     "frame_shift_ms": null,  // stft window hop length in ms. If null, "hop_length" is used.
>
>     // Audio processing parameters
>     "sample_rate": 22050,  // DATASET-RELATED: wav sample rate. If different from the original data, it is resampled.
>     "preemphasis": 0.0,    // pre-emphasis to reduce spec noise and make it more structured. If 0.0, no pre-emphasis.
>     "ref_level_db": 20,    // reference level dB; theoretically 20 dB is the sound of air.
>
>     // Silence trimming
>     "do_trim_silence": true, // enable trimming of silence as audio is loaded. LJSpeech (false), TWEB (false), Nancy (true)
>     "trim_db": 60,           // threshold for trimming silence. Set this according to your dataset.
>
>     // Griffin-Lim
>     "power": 1.5,            // value to sharpen wav signals after the GL algorithm.
>     "griffin_lim_iters": 60, // number of Griffin-Lim iterations. 30-60 is a good range. The larger the value, the slower the generation.
>
>     // MelSpectrogram parameters
>     "num_mels": 80,     // size of the mel spec frame.
>     "mel_fmin": 0.0,    // minimum freq level for mel-spec. ~50 for male and ~95 for female voices. Tune for your dataset!!
>     "mel_fmax": 8000.0, // maximum freq level for mel-spec. Tune for your dataset!!
>
>     // Normalization parameters
>     "signal_norm": true,    // normalize spec values. Mean-var normalization if "stats_path" is defined, otherwise range normalization defined by the other params.
>     "min_level_db": -100,   // lower bound for normalization
>     "symmetric_norm": true, // move normalization to the range [-1, 1]
>     "max_norm": 4.0,        // scale normalization to the range [-max_norm, max_norm] or [0, max_norm]
>     "clip_norm": true,      // clip normalized values into the range.
>     "stats_path": null      // DO NOT USE WITH MULTI_SPEAKER MODEL. Scaler stats file computed by "compute_statistics.py". If defined, mean-std based normalization is used and the other normalization params are ignored.
>   },
>
>   // VOCABULARY PARAMETERS
>   // if a custom character set is not defined,
>   // the default set in symbols.py is used
>   // "characters": {
>   //   "pad": "",
>   //   "eos": "~",
>   //   "bos": "^",
>   //   "characters": "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz!'(),-.:;? ",
>   //   "punctuations": "!'(),-.:;? ",
>   //   "phonemes": "iyɨʉɯuɪʏʊeøɘəɵɤoɛœɜɞʌɔæɐaɶɑɒᵻʘɓǀɗǃʄǂɠǁʛpbtdʈɖcɟkɡqɢʔɴŋɲɳnɱmʙrʀⱱɾɽɸβfvθðszʃʒʂʐçʝxɣχʁħʕhɦɬɮʋɹɻjɰlɭʎʟˈˌːˑʍwɥʜʢʡɕʑɺɧɚ˞ɫ"
>   // },
>
>   // DISTRIBUTED TRAINING
>   "distributed": {
>     "backend": "nccl",
>     "url": "tcp://localhost:54321"
>   },
>
>   "reinit_layers": [], // list of layer names to restore from the given checkpoint. If not defined, all heuristically matching layers are reloaded.
>
>   // TRAINING
>   "batch_size": 32,      // Batch size for training. Values lower than 32 might make attention hard to learn. Overwritten by "gradual_training".
>   "eval_batch_size": 16,
>   "r": 7,                // Number of decoder frames to predict per iteration. Set the initial value if gradual training is enabled.
>   "gradual_training": [[0, 7, 64], [1, 5, 64], [50000, 3, 32], [130000, 2, 32], [290000, 1, 32]], // gradual training steps as [first_step, r, batch_size]. If null, gradual training is disabled. For Tacotron, you might need to reduce the "batch_size" as you proceed.
>   "loss_masking": true,  // enable / disable loss masking against the sequence padding.
>   "ga_alpha": 10.0,      // weight for guided attention loss. If > 0, guided attention is enabled.
>
>   // VALIDATION
>   "run_eval": true,
>   "test_delay_epochs": 10,     // Until attention is aligned, testing only wastes computation time.
>   "test_sentences_file": null, // a file with sentences to be used for testing. If null, the default English sentences are used.
>
>   // OPTIMIZER
>   "noam_schedule": false, // use Noam warmup and lr schedule.
>   "grad_clip": 1.0,       // upper limit for gradient clipping.
>   "epochs": 1000,         // total number of epochs to train.
>   "lr": 0.0001,           // Initial learning rate. If Noam decay is active, the maximum learning rate.
>   "wd": 0.000001,         // Weight decay weight.
>   "warmup_steps": 4000,   // Noam decay steps to increase the learning rate from 0 to "lr"
>   "seq_len_norm": false,  // Normalize each sample loss by its length to alleviate imbalanced datasets. Use it if your dataset is small or has a skewed distribution of sequence lengths.
>
>   // TACOTRON PRENET
>   "memory_size": -1,         // ONLY TACOTRON - size of the memory queue used for storing the last decoder predictions for auto-regression. If < 0, the memory queue is disabled and the decoder only uses the last prediction frame.
>   "prenet_type": "original", // "original" or "bn".
>   "prenet_dropout": true,    // enable/disable dropout at the prenet.
>
>   // ATTENTION
>   "attention_type": "original",   // "original" or "graves"
>   "attention_heads": 4,           // number of attention heads (only for "graves")
>   "attention_norm": "sigmoid",    // softmax or sigmoid. Softmax is suggested for Tacotron2 and sigmoid for Tacotron.
>   "windowing": false,             // Enables attention windowing. Used only in eval mode.
>   "use_forward_attn": false,      // whether to use forward attention. In general, it aligns faster.
>   "forward_attn_mask": false,     // Additional masking forcing monotonicity only in eval mode.
>   "transition_agent": false,      // enable/disable the transition agent of forward attention.
>   "location_attn": true,          // enable/disable location-sensitive attention. Enabled for Tacotron by default.
>   "bidirectional_decoder": false, // use https://arxiv.org/abs/1907.09006. Use it if attention does not work well with your dataset.
>
>   // STOPNET
>   "stopnet": true,          // Train the stopnet predicting the end of synthesis.
>   "separate_stopnet": true, // Train the stopnet separately if "stopnet == true". It prevents the stopnet loss from influencing the rest of the model. It yields a better model, but trains SLOWER.
>
>   // TENSORBOARD and LOGGING
>   "print_step": 25,    // Number of steps between logging training on the console.
>   "print_eval": false, // If true, prints intermediate loss values during evaluation.
>   "save_step": 10000,  // Number of training steps between saving training stats and checkpoints.
>   "checkpoint": true,  // If true, saves checkpoints every "save_step"
>   "tb_model_param_stats": false, // If true, plots param stats per layer on tensorboard. Might be memory consuming, but good for debugging.
>
>   // DATA LOADING
>   "text_cleaner": "phoneme_cleaners",
>   "enable_eos_bos_chars": false, // enable/disable beginning-of-sentence and end-of-sentence chars.
>   "num_loader_workers": 4,       // number of training data loader processes. Don't set it too big. 4-8 are good values.
>   "num_val_loader_workers": 4,   // number of evaluation data loader processes.
>   "batch_group_size": 0,         // Number of batches to shuffle after bucketing.
>   "min_seq_len": 6,              // DATASET-RELATED: minimum text length to use in training
>   "max_seq_len": 153,            // DATASET-RELATED: maximum text length
>
>   // PATHS
>   "output_path": "../LJSpeech/",
>
>   // PHONEMES
>   "phoneme_cache_path": "../TTS/mozilla_us_phonemes_3", // phoneme computation is slow, so results are cached in the given folder.
>   "use_phonemes": true,     // use phonemes instead of raw characters. Suggested for better pronunciation.
>   "phoneme_language": "it", // depending on your target language, pick one from https://github.com/bootphon/phonemizer#languages
>
>   // MULTI-SPEAKER and GST
>   "use_speaker_embedding": false, // use speaker embedding to enable multi-speaker learning.
>   "style_wav_for_test": null,     // path to a style wav file to be used in TacotronGST inference.
>   "use_gst": false,               // TACOTRON ONLY: use global style tokens
>
>   // DATASETS
>   "datasets": // List of datasets. They are all merged and get different speaker_ids.
>   [
>     {
>       "name": "ljspeech",
>       "path": "../LJSpeech-1.1/",
>       "meta_file_train": "metadata.csv",
>       "meta_file_val": null
>     }
>   ]
>
> }
I don't know what happened. Can you suggest what is wrong here?
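In case it matters: each epoch re-creates the DataLoader iterator, which forks `num_loader_workers` fresh worker processes (that is exactly where the traceback dies), so if memory usage creeps up over time the fork will eventually fail. One workaround I'm considering (a guess on my part, not a confirmed fix) is lowering the worker counts in config.json:

```json
"num_loader_workers": 2,     // fewer forked loader processes per epoch
"num_val_loader_workers": 2
```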
[This is an archived TTS discussion thread from discourse.mozilla.org/t/training-fails-after-223-epochs]