Reducing the High Cost of Training NLP Models With SRU

The rising computation time and cost of training natural language processing (NLP) models highlight the importance of designing computationally efficient models that retain state-of-the-art modeling power while requiring less, or faster, computation. A single experiment training a top-performing language model on the 'Billion Word' benchmark would take 384 GPU days and as much as $36,000 using AWS on-demand instances.
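The SRU (Simple Recurrent Unit) named in the title addresses this cost by restructuring the recurrence so that all matrix multiplications depend only on the input and can be batched across timesteps, leaving only cheap elementwise operations in the sequential loop. Below is a minimal NumPy sketch of a single SRU layer following the published recurrence equations; the function name, parameter shapes, and initialization are illustrative assumptions, not the library's actual API.

```python
# Illustrative NumPy sketch of one SRU layer (recurrence from Lei et al.).
# Names, shapes, and initialization here are assumptions for demonstration.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sru_layer(x, W, Wf, Wr, vf, vr, bf, br):
    """x: (seq_len, d). The three projections below use only x, so they
    are computed for all timesteps at once; only the lightweight
    elementwise recurrence is sequential -- the source of SRU's speedup."""
    seq_len, d = x.shape
    u  = x @ W      # candidate values, all timesteps in one matmul
    uf = x @ Wf     # forget-gate pre-activations
    ur = x @ Wr     # reset-gate pre-activations
    c = np.zeros(d)
    h = np.empty((seq_len, d))
    for t in range(seq_len):
        f = sigmoid(uf[t] + vf * c + bf)   # forget gate
        r = sigmoid(ur[t] + vr * c + br)   # reset (highway) gate
        c = f * c + (1.0 - f) * u[t]       # internal cell state
        h[t] = r * c + (1.0 - r) * x[t]    # highway connection to input
    return h

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal((16, d))
W, Wf, Wr = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
vf, vr = rng.standard_normal(d), rng.standard_normal(d)
bf = br = np.zeros(d)
out = sru_layer(x, W, Wf, Wr, vf, vr, bf, br)
print(out.shape)  # (16, 8)
```

Because the loop body contains no matrix multiplication, each timestep is O(d) elementwise work rather than O(d^2), which is what makes SRU dramatically cheaper to train than a standard LSTM at the same width.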

Originally from KDnuggets https://ift.tt/2MKc8HM

source https://365datascience.weebly.com/the-best-data-science-blog-2020/reducing-the-high-cost-of-training-nlp-models-with-sru

Published by 365 Data Science

