Regulators are focusing on real AI risks over theoretical ones. Good
Rules on safety may one day be needed. But not yet
“I’m sorry, Dave. I’m afraid I can’t do that.” HAL 9000, the murderous computer in “2001: A Space Odyssey”, is one of many examples in science fiction of an artificial intelligence (AI) that outwits its human creators with deadly consequences. Recent progress in AI, notably the release of ChatGPT, has pushed the question of “existential risk” up the international agenda. In March 2023 a host of tech luminaries, including Elon Musk, called for a pause of at least six months in the development of AI over safety concerns. At an AI-safety summit in Britain last autumn, politicians and boffins discussed how best to regulate this potentially dangerous technology.
This article appeared in the Leaders section of the print edition under the headline “Reality check”