If you write things, whether news articles from standard press releases, reportage from a war zone, code, poetry, communiqués, or fiction, then published results from researchers testing OpenAI's most recent language model, GPT-3 (Generative Pre-trained Transformer 3), may have rattled you. Or they may not have.
Think of GPT-3 as the autopredict capability of your phone or computer, but on steroids. The model was trained on 499 billion tokens (a token is a unit of text or punctuation in natural language processing) and has 175 billion parameters (patterns learned from the training dataset).
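The "autopredict on steroids" idea can be sketched in miniature. The toy below is not GPT-3, just a hypothetical bigram predictor: it counts which word follows which in a tiny corpus and suggests the most frequent follower, the same next-token principle GPT-3 applies with 175 billion learned parameters instead of a count table.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus (an assumption for this sketch, not GPT-3's data)
corpus = "the model writes the poem and the model writes the code".split()

# Count which word follows which, like a phone's autopredict learning habits
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))    # → model
print(predict_next("model"))  # → writes
```

Scaling this idea up, from literal word counts to billions of parameters trained on terabytes of text, is what lets GPT-3 continue almost any prompt plausibly.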
It has been fed much of the Internet and then some: 45TB of text in training, compared with its predecessor GPT-2's 40GB. This gigantic wealth of information is why it can, with some prompting, produce a diverse range of material, from poems to news articles to lines of code, with astonishing coherence. Read more via Techabal