The fields of artificial intelligence and machine learning are moving so quickly that any notion of ethics is lagging decades behind, or left to works of science fiction. This might explain a new study out of Shanghai Jiao Tong University, which says computers can tell whether you will be a criminal based on nothing more than your facial features.
The bankrupt attempt to infer moral qualities from physiology was a popular pursuit for millennia, particularly among those who wanted to justify the supremacy of one racial group over another. But phrenology, which involved studying the cranium to determine someone’s character and intelligence, was debunked around the time of the Industrial Revolution, and few outside of the pseudo-scientific fringe would still claim that the shape of your mouth or size of your eyelids might predict whether you’ll become a rapist or thief.
Not so in the modern age of artificial intelligence, apparently: In a paper titled “Automated Inference on Criminality using Face Images,” two Shanghai Jiao Tong University researchers say they fed “facial images of 1,856 real persons” into computers and found “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” They conclude that “all four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.”
Though long ago rejected by the scientific community, phrenology and other forms of physiognomy have reappeared throughout dark chapters of history. A 2009 article in Pacific Standard on the racial horrors of colonial Rwanda might’ve been good background material for the pair:
In the 1920s and 1930s, the Belgians, in their role as occupying power, put together a national program to try to identify individuals’ ethnic identity through phrenology, an abortive attempt to create an ethnicity scale based on measurable physical features such as height, nose width and weight, with the hope that colonial administrators would not have to rely on identity cards.
This can’t be overstated: The authors of this paper — in 2016 — believe computers are capable of scanning images of your lips, eyes, and nose to detect future criminality. It’s enough to make phrenology seem quaint.
The study contains virtually no discussion of why there is a “historical controversy” over this kind of analysis — namely, that it was debunked long ago. Rather, the authors trot out another discredited argument to support their main claims: that computers can’t be racist, because they’re computers:
Unlike a human examiner/judge, a computer vision algorithm or classifier has absolutely no subjective baggages, having no emotions, no biases whatsoever due to past experience, race, religion, political doctrine, gender, age, etc., no mental fatigue, no preconditioning of a bad sleep or meal. The automated inference on criminality eliminates the variable of meta-accuracy (the competence of the human judge/examiner) all together. Besides the advantage of objectivity, sophisticated algorithms based on machine learning may discover very delicate and elusive nuances in facial characteristics and structures that correlate to innate personal traits and yet hide below the cognitive threshold of most untrained nonexperts.
This misses the fact that no computer or software is created in a vacuum. Software is designed by people, and people who set out to infer criminality from facial features are not free from inherent bias.
Absent, too, is any discussion of the incredible potential for abuse of this software by law enforcement. Kate Crawford, an AI researcher with Microsoft Research New York, MIT, and NYU, told The Intercept, “I’d call this paper literal phrenology, it’s just using modern tools of supervised machine learning instead of calipers. It’s dangerous pseudoscience.”
Crawford cautioned that “as we move further into an era of police body cameras and predictive policing, it’s important to critically assess the problematic and unethical uses of machine learning to make spurious correlations,” adding that it’s clear the authors “know it’s ethically and scientifically problematic, but their ‘curiosity’ was more important.”
Given the explosive, excited growth of AI as a field of study and a hot commodity, don’t be surprised if this curiosity is contagious.