One day in 2020, police arrested Robert Williams in his Detroit driveway, handcuffed him in front of his children, and took him away.
He had no idea what he’d done wrong.
That night he slept on a cold cell floor using his jacket for a pillow. His wife called the Detroit detention centre repeatedly, but got no answers.
The next day, Mr Williams was told of his alleged crime.
But the full story of why him — why police mistakenly thought he was the criminal — only surfaced later, after a lot of digging.
Finally, having sued the Detroit police, he learnt he was the victim of a faulty artificial intelligence (AI) facial recognition system.
And even as he fought to fully clear his name, the system continued operating.
Police used it to identify suspected criminals. It still made mistakes.
And the men and women it falsely singled out? They were all African American, like Robert.
Now, with similar technology being used in Australia, and the government introducing legislation to partly regulate its use, Mr Williams is telling his story as a cautionary tale.
“I knew that that technology had evolved and this was a thing, but I didn’t know they were just taking the technology and arresting people,” he says, speaking from his home in Detroit.
“I didn’t know the technology could make an arrest.”
A powerful tool with a hidden flaw
Robert Williams’ arrest in January 2020 was the first documented US case of a person being wrongfully detained based on facial recognition technology.
When the officers knocked on his door, police departments were in the midst of a technological revolution.
A new kind of powerful AI was driving a rollout of facial recognition in law enforcement.
It wasn’t just the US. This was happening around the world, including in Australia.
For police, the benefits were obvious. Facial recognition could analyse a blown-up still taken from a security tape, sift through a database of millions of driver licence photos, and identify the person who committed the crime.
But the technology wasn’t perfect.
Apart from facilitating a system of mass surveillance that threatened people’s privacy, the new AI systems were racially biased.
They were significantly more likely to falsely identify people of colour.
Despite this documented problem, police came to rely increasingly on AI systems in their investigations.
Sucked into the criminal justice system
In 2018, a man in a baseball cap stole thousands of dollars’ worth of watches from a store in central Detroit.
Months later, the facial recognition system used by Detroit police combed through its database of millions of driver licences to identify the criminal in the grainy security tapes.
Mr Williams’ photo didn’t come up first.
In fact, it was the ninth-most-probable match.
But it didn’t matter.
Officers drove to Mr Williams’ house and handcuffed him. He’d never been arrested before. It felt like a movie, or a bad dream.
“And at the time we had a five- and a two-year-old,” his wife Melissa Williams says.
“I was trying to keep them somewhat shielded, and also see what was happening.”
The arresting officers didn’t know the details of the crime.
As they drove him to the detention centre, Mr Williams was sucked into the machine of criminal justice.
And once he was in the system, he’d spend years trying to get out.
“I tried to tell the guy he was making a mistake. I say, ‘Y’all got the wrong guy.'”
“And he was like, ‘Look, I’m just here doing my job.'”
Racial bias creeps into facial recognition
The reason the AI system identified the wrong guy goes back to a flaw in the way it was trained to detect faces.
Modern facial recognition uses a machine-learning method called neural networks, which recognise complex patterns in information.
Instead of being programmed with rules-based logic, they’re “trained” on data.
For instance, if they’re fed lots of photos with and without faces (where each photo is labelled to say whether it has a face or not), they learn through trial and error to identify faces within photos.
For facial recognition, the AI maps a face’s distinctive features (such as the space between nose and mouth, or the size of the eyebrows), then converts the image into a string of numbers, or “faceprint”, that encodes these features.
It then compares this unique code against the faceprints of every image in its database to see if there’s a close-enough match.
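To make that pipeline concrete, here’s a minimal Python sketch, not any vendor’s actual system: `embed_face` is a stub standing in for the trained neural network, and the small gallery stands in for a database of millions of licence photos.

```python
import numpy as np

EMBEDDING_DIM = 128  # faceprints are vectors of roughly this size

def embed_face(image_id: str, rng: np.random.Generator) -> np.ndarray:
    """Stub for the neural network that maps a face image to a faceprint.
    Here it just returns a random unit-length vector, for illustration."""
    vec = rng.standard_normal(EMBEDDING_DIM)
    return vec / np.linalg.norm(vec)

def best_matches(probe, gallery, top_k=5):
    """Rank every identity in the gallery by similarity to the probe.
    For unit-length vectors, the dot product is the cosine similarity."""
    scores = {name: float(probe @ vec) for name, vec in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
# A real system would hold millions of licence photos; 1,000 stubs here.
gallery = {f"licence_{i:04d}": embed_face(f"licence_{i:04d}", rng) for i in range(1000)}
probe = embed_face("security_still", rng)  # the blown-up CCTV frame

for name, score in best_matches(probe, gallery):
    print(f"{name}: similarity {score:.3f}")
```

Note that the search returns a ranked list of candidates rather than a single verdict, which is how Mr Williams’ photo could surface as only the ninth-most-probable match.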
Simple, right? Yes, but the AI is only as good as its training data.
If it’s mostly trained on one kind of face, it’s significantly worse at accurately matching other kinds of faces.
And that’s exactly what happened with a lot of facial recognition, including the system that falsely identified Mr Williams.
The AI was trained on a database of mostly white people. White people are disproportionately represented on the internet, and therefore also in image datasets compiled by scraping photos from social media.
And this wasn’t the only problem. The photos of people of colour in the dataset were generally of worse quality, as default camera settings are often not optimised to capture darker skin tones.
As a result, the system was bad at accurately matching faces of people of colour.
Racial bias quietly crept into facial recognition.
Since most people working in AI were white, they didn’t notice.
In 2019, a US study of more than 100 facial recognition systems found they falsely identified African American faces up to 100 times more often than Caucasian faces.
This study included algorithms used in the facial recognition system that picked out Robert Williams’ licence photo.
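The disparity that study measured boils down to a simple statistic: among pairs of photos of two different people, how often does the system’s similarity score cross the match threshold? Here is a toy version of that calculation; the groups, scores and threshold are all invented, and a real audit would use millions of comparisons.

```python
from collections import defaultdict

THRESHOLD = 0.80  # invented; scores at or above it count as a "match"

# Each record compares photos of two DIFFERENT people:
# (demographic group, similarity score the system produced).
comparisons = [
    ("group_a", 0.62), ("group_a", 0.71), ("group_a", 0.55), ("group_a", 0.81),
    ("group_b", 0.85), ("group_b", 0.88), ("group_b", 0.74), ("group_b", 0.83),
]

totals = defaultdict(int)
false_matches = defaultdict(int)
for group, score in comparisons:
    totals[group] += 1
    if score >= THRESHOLD:
        false_matches[group] += 1  # different people, yet flagged as the same

for group in sorted(totals):
    rate = false_matches[group] / totals[group]
    print(f"{group}: false match rate {rate:.0%}")
```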
By January 2020, as Mr Williams had his mug shot taken in the Detroit detention centre, civil liberties groups knew that black people were being falsely accused due to this technology.
But they couldn’t prove it was happening, says Phil Mayor, a senior staff attorney at the American Civil Liberties Union (ACLU) of Michigan.
“All sorts of people around the country were saying this technology doesn’t work,” he says.
“We knew this was going to happen.”
‘The computer got it wrong’
Alongside his mug shot, Mr Williams had his fingerprints and DNA taken, and was held overnight.
The next day, two detectives took him to an interrogation room and placed pieces of paper face down on the table in front of him.
They explained these were blown-up security tape stills from a store that was robbed, about 30 minutes’ drive from his house.
Then they turned the photos face up, one by one.
The first photo showed a heavy-set black man in a red baseball cap standing beside a watch display.
The second was a blurry close-up.
It clearly wasn’t Robert Williams.
“I wanted to ask, ‘Do you think all black people look alike?’ Because he was a big black guy, but that don’t make it me though.”
One of the detectives then asked, “So the computer got it wrong?”
It was Mr Williams’ first clue that the arrest was based on facial recognition.
“And I’m like, ‘Yeah, the computer got it wrong.'”
Mr Williams later found out police did almost no other investigative work after getting the computer match.
If they’d asked him for an alibi, they’d have found he couldn’t have done the crime.
A video on his phone proved he was miles away at the time of the theft.
AI keeps falsely identifying black men and women
Mr Williams was released from detention that night, but his journey through the justice system was only just beginning.
His mug shot, fingerprints and DNA were still on file, and he needed a lawyer to defend against the theft charge.
He hired the ACLU’s Phil Mayor, who got the charge dismissed in court.
But Mr Williams wasn’t done. He then campaigned for Detroit police to stop using facial recognition. When they refused, he sued them for wrongful arrest. This case is ongoing.
“Detroit is one of the blackest cities in America,” Mr Mayor says.
“It’s a majority black city, and here it is investing millions of dollars of taxpayer money in using a technology that is particularly unreliable in identifying black faces.”
Police use of facial recognition is now a polarising issue in the US.
At least five more people were wrongfully arrested after being falsely identified by facial recognition systems.
They were all black men and women.
The most recent example is an eight-month-pregnant black woman in Detroit, wrongfully arrested for robbery and carjacking this year.
Detroit police didn’t respond to the ABC’s request for comment.
The problem with facial recognition isn’t just that it can be bad at identifying black faces; it’s also the way police end up using it, Mr Mayor says.
In theory, they’re only meant to use a facial recognition match as a clue in a case.
But that doesn’t always happen, Mr Mayor says.
Police sometimes use the face match solely as grounds for an arrest.
The AI effectively decides who gets arrested.
“Here in America, the police are trying to say, don’t worry, we’re only using this technology to get a lead,” he says.
“And then we go out and we do an investigation. But the thing is, you know, shoddy technology leads to shoddy investigations.”
Facial recognition widespread in Australia, but with no legal guardrails
In Australia, various types of facial recognition are widely used, but the issue is less public than in the US.
This is partly due to a history of failed regulation attempts.
In 2015, the federal government proposed a national facial recognition system it dubbed “the capability”.
It would give law enforcement and security agencies quick access to up to 100 million facial images from databases around Australia, including driver licence and passport photos.
In 2019, it introduced legislation to govern the system’s use.
But the legislation was widely criticised as draconian and never passed parliament.
That didn’t stop the then government from ploughing ahead with its planned national facial recognition system, says Edward Santow, an expert on responsible AI at the University of Technology Sydney, and the Australian Human Rights Commissioner at the time.
The capability was rolled out without any legislation dealing specifically with how it should be used.
It was the worst possible scenario, Professor Santow says.
“The only thing worse than really bad legal guardrails is no legal guardrail. And that’s what we’ve had for the last four years.”
Could a case like Robert Williams’ happen in Australia?
Because there are no specific rules around facial recognition in Australia, it’s unclear whether a case like Robert Williams’ could happen here, Professor Santow says.
“We simply don’t know, which is the problem in itself.”
Police have made broad public assurances they don’t use the national facial recognition system to compare one person’s photo against the entire national database to identify them.
This is known as “one-to-many” face matching, which is what police used to arrest Mr Williams.
Other kinds of facial recognition include “one-to-one” services used to verify documents, such as confirming a person’s face matches the photo on their passport. This kind is used millions of times per year.
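In code terms, the two modes differ only in what a faceprint gets compared against: one stored photo, or every photo in a database. A minimal sketch, with invented faceprints and an invented threshold:

```python
import numpy as np

def unit(v: np.ndarray) -> np.ndarray:
    """Normalise a vector so dot products act as cosine similarity."""
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
THRESHOLD = 0.8  # invented cut-off for declaring a "match"

# Hypothetical faceprints standing in for real embeddings.
passport_photo = unit(rng.standard_normal(128))
camera_capture = unit(rng.standard_normal(128))
national_db = {f"person_{i}": unit(rng.standard_normal(128)) for i in range(5)}

# One-to-one: verify a single claimed identity, e.g. at an airport gate.
verified = float(camera_capture @ passport_photo) >= THRESHOLD

# One-to-many: trawl the whole database for lookalikes. This is the mode
# that produced the lead in Mr Williams' case.
ranked = sorted(
    ((name, float(camera_capture @ vec)) for name, vec in national_db.items()),
    key=lambda kv: kv[1],
    reverse=True,
)

print("one-to-one verified:", verified)
print("one-to-many top candidate:", ranked[0])
```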
But even if police are not using the national system for one-to-many facial recognition, they have been using commercial one-to-many services, such as Clearview AI, which relies on images scraped from the internet.
In late 2021, Australia’s Information Commissioner found use of Clearview broke Australia’s privacy law.
Despite this, last month Senate estimates heard that the federal police had tested a second commercial one-to-many face-matching service, PimEyes, earlier this year.
Australian retailers such as Bunnings and Kmart have also used commercial one-to-many services to surveil customers.
The federal government recently introduced a bill to govern some uses of one-to-one and one-to-many facial recognition.
At the time, Attorney-General Mark Dreyfus said the Identity Verification Services Bill would put strict limits on the use of one-to-many face matching.
But Professor Santow says these restrictions only apply to the national facial recognition system.
The bill doesn’t restrict police use of commercial one-to-many services, he says.
“It’s going to probably have zero effect on the police.”
In the US, Robert Williams is campaigning to ban the use of facial recognition by law enforcement. Having lived a quiet suburban life up to 2020, he now speaks to lawmakers around the country as they consider whether to ban or approve the technology.
Mr Williams acknowledges the bias problem may be fixed. Training systems on larger and more diverse databases appears to be helping.
But even if that happens, he’ll still oppose facial recognition for mass surveillance.
“I don’t want to be surveilled at all times, so that every red light there’s a camera looking into your car.
“I guess it makes sense for crime, but what about people who are just living life?”
Listen to the full story of Robert Williams and the rise of facial recognition, and subscribe to RN Science Friction.