It doesn’t take much to make machine-learning algorithms go awry
The rise of large-language models could make the problem worse
The algorithms that underlie modern artificial-intelligence (AI) systems need lots of data on which to train. Much of that data comes from the open web, which unfortunately makes the AIs susceptible to a type of cyber-attack known as “data poisoning”: modifying or adding extraneous information to a training data set so that an algorithm learns harmful or undesirable behaviours. Like a real poison, poisoned data can go unnoticed until after the damage has been done.
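To make the mechanism concrete, the sketch below shows one simple form of data poisoning, label flipping, on a toy classification task. Everything here is an illustrative assumption (the synthetic data set, the 10% poisoning rate, the logistic-regression model), not a reconstruction of any real attack; the point is only that corrupting a modest slice of the training data quietly degrades what the model learns.

```python
# A minimal sketch of label-flipping data poisoning, assuming scikit-learn.
# The data set, model and poisoning rate are all illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# A synthetic binary-classification task standing in for web-scraped data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def train_and_score(labels):
    """Train on (X_train, labels) and score on the untouched test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Poison 10% of the training set by flipping labels -- the "extraneous
# information" an attacker might slip into data scraped from the open web.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("accuracy, clean training data   :", train_and_score(y_train))
print("accuracy, poisoned training data:", train_and_score(poisoned))
```

Run end to end, the poisoned model typically scores visibly worse than the clean one, and nothing in the training pipeline itself flags the corruption: the poison is only apparent after the damage is done.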
This article appeared in the Science & technology section of the print edition under the headline “Digital poisons”