Leaders | Artificial intelligence

Regulators are focusing on real AI risks over theoretical ones. Good

Rules on safety may one day be needed. But not yet

Illustration: Michael Haddad

“I’m sorry, Dave, I’m afraid I can’t do that.” HAL 9000, the murderous computer in “2001: A Space Odyssey”, is one of many examples in science fiction of an artificial intelligence (AI) that outwits its human creators with deadly consequences. Recent progress in AI, notably the release of ChatGPT, has pushed the question of “existential risk” up the international agenda. In March 2023 a host of tech luminaries, including Elon Musk, called for a pause of at least six months in the development of AI over safety concerns. At an AI-safety summit in Britain last autumn, politicians and boffins discussed how best to regulate this potentially dangerous technology.
This article appeared in the Leaders section of the print edition under the headline “Reality check”

From the August 24th 2024 edition