The bigger-is-better approach to AI is running out of road
If AI is to keep getting better, it will have to do more with less
When it comes to “large language models” (LLMs) such as GPT—which powers ChatGPT, a popular chatbot made by OpenAI, an American research lab—the clue is in the name. Modern AI systems are powered by vast artificial neural networks, bits of software modelled, very loosely, on biological brains. GPT-3, an LLM released in 2020, was a behemoth. It had 175bn “parameters”, as the simulated connections between those neurons are called. It was trained by having thousands of GPUs (specialised chips that excel at AI work) crunch through hundreds of billions of words of text over the course of several weeks. All that is thought to have cost at least $4.6m.
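To make "parameters" concrete, here is a minimal, illustrative sketch in PyTorch: a toy two-layer network (nothing like GPT-3's actual architecture, and with made-up layer sizes) whose parameters are simply the learned weights and biases connecting its simulated neurons.

```python
# Minimal sketch (not GPT-3's architecture): a toy feed-forward network whose
# "parameters" are the learned weights and biases -- the simulated connections
# between neurons that the article refers to. GPT-3 has about 175bn of them.
import torch.nn as nn

toy_model = nn.Sequential(
    nn.Linear(512, 2048),   # weight matrix of 512 x 2048, plus 2,048 biases
    nn.ReLU(),
    nn.Linear(2048, 512),   # weight matrix of 2048 x 512, plus 512 biases
)

n_params = sum(p.numel() for p in toy_model.parameters())
print(f"Toy model parameters: {n_params:,}")  # roughly 2.1m, against 175bn for GPT-3
```

Counting the entries of those weight matrices and bias vectors gives around 2.1m parameters; training consists of nudging each of those numbers, which is why a 175bn-parameter model demands weeks of work from thousands of GPUs.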
This article appeared in the Science & technology section of the print edition under the headline “Time for a diet”