This is a very interesting question.
Some potential avenues of research:
Optimize for the model with the lowest training accuracy that still generates coherent and phonetically sound words. Soundness could be assessed with some kind of NLP/phonotactics check (a rough sketch follows below). However, this raises the issue that, for instance, (cool new) slang words may not always adhere to (Dutch) language norms in terms of phonetics.
Under the assumption that the model only gets better on the training data as you train for longer, non-converged models are one way to still get a certain level of 'babbling' behaviour (term stolen from https://arxiv.org/pdf/2010.04637.pdf).
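As a very rough stand-in for the unspecified "NLP parser" above, here is a minimal sketch (my own assumption, not anything from this repo) that flags strings whose character bigrams never occur in the training vocabulary. A real check for Dutch phonotactics would probably use a grapheme-to-phoneme tool instead.

```python
# Toy phonotactic plausibility check: a generated word is considered
# "plausible" if every character bigram (including word boundaries)
# also occurs somewhere in the training vocabulary.

def build_bigram_set(train_words):
    """Collect all character bigrams seen in training, with ^/$ as boundaries."""
    bigrams = set()
    for word in train_words:
        padded = f"^{word}$"
        bigrams.update(padded[i:i + 2] for i in range(len(padded) - 1))
    return bigrams

def looks_pronounceable(word, train_bigrams):
    """True if all bigrams of the (boundary-padded) word were seen in training."""
    padded = f"^{word}$"
    return all(padded[i:i + 2] in train_bigrams for i in range(len(padded) - 1))

train_words = ["amsterdam", "rotterdam", "utrecht"]   # toy training vocabulary
bigrams = build_bigram_set(train_words)
print(looks_pronounceable("amstecht", bigrams))       # True: all bigrams seen
print(looks_pronounceable("bvbvmcnv", bigrams))       # False: e.g. '^b' unseen
```

Note that, exactly as mentioned above, such a check would also reject legitimate new slang that breaks existing phonotactic patterns.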
One easy way would be to check every (few) epochs whether the model is memorizing too many words:
Simply sample a large number of words from the model (with various temperatures); n% of these words will be memorized from the training data (see the sketch below).
We would have to find a good value for n experimentally. The n% can also be used as a sort of proxy for full-word accuracy, but we could also set a threshold level of accuracy at the character level.
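Here is a minimal sketch of that check, assuming a hypothetical `sample_word(model, temperature)` generation function (not an API from this repo):

```python
# Sketch of the n% memorization check: sample words at several temperatures
# and return the fraction that appear verbatim in the training data.

def memorization_rate(model, train_words, n_samples=1000,
                      temperatures=(0.5, 0.8, 1.0, 1.2)):
    train_set = set(train_words)
    memorized = 0
    total = 0
    for temp in temperatures:
        for _ in range(n_samples):
            word = sample_word(model, temperature=temp)  # hypothetical sampler
            memorized += word in train_set
            total += 1
    return memorized / total

# Every few epochs during training (the threshold n is something to tune):
# if memorization_rate(model, train_words) > 0.30:
#     stop training (or keep an earlier, less-converged checkpoint)
```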
Edit: a quick look at n for an upcoming plaatsnamen (place names) model :)
More of a philosophical question: when does a generated string like 'bvbvmcnv' count as a word? :)
Any papers/blog posts on this are welcome!