Graphic detail | Bias in, bias out

Demographic skews in training data create algorithmic errors

Women and people of colour are underrepresented and depicted with stereotypes

ALGORITHMIC BIAS is often described as a thorny technical problem. Machine-learning models can respond to almost any pattern—including ones that reflect discrimination. Their designers can explicitly prevent such tools from consuming certain types of information, such as race or sex. Nonetheless, the use of related variables, like someone’s address, can still cause models to perpetuate disadvantage.
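To make the proxy-variable mechanism concrete, here is a minimal sketch using synthetic data (the variable names, figures and scenario are hypothetical, not drawn from the article): a model trained without the protected attribute still reproduces a disparity, because a correlated stand-in such as a neighbourhood score remains in the data.

```python
# Minimal sketch with synthetic data: excluding a protected attribute does not
# remove bias when a correlated proxy (here a hypothetical postcode score) remains.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g. a demographic group) -- deliberately withheld from the model.
group = rng.integers(0, 2, size=n)

# Proxy feature: a neighbourhood-level score strongly correlated with the group.
postcode_score = group + rng.normal(0, 0.5, size=n)

# Historical outcome that itself reflects discrimination against group 1.
p = 1 / (1 + np.exp(-(1.0 - 2.0 * group + rng.normal(0, 0.3, size=n))))
approved = rng.random(n) < p

# The model is trained on the proxy alone, never on the protected attribute.
X = postcode_score.reshape(-1, 1)
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    rate = pred[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
# The predicted approval rates still differ by group, because the proxy lets
# the model reconstruct and perpetuate the historical pattern.
```

Running the sketch shows a gap in predicted approval rates between the two groups, even though the group variable was never an input: the disadvantage travels through the correlated address-level feature.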

