Background

Having a good pre-trained language model can be a significant time and money saver (better results for less compute and time) for NLP projects. Empirical studies have shown that unsupervised pre-training greatly improves generalization and training in many machine learning tasks. For example, according to this paper from Montreal and Google: The results suggest that …