AI Facial Recognition Could Tag You as a Zombie, Raising Legal and Ethical Concerns

By Bulletin Staff

The surveillance video is shocking. In low-res black and white, we see an ambling midday crowd on a London Underground platform: businesspeople hurrying to and from meetings, shoppers lugging bags of newly acquired swag, and parents escorting children home from school.

Suddenly, the CCTV footage captures members of London’s Rapid Undead Containment Team (RUCT) swarming a smartly dressed middle-aged man, pushing him to the ground and holding him down while one of their number prepares to pierce the man’s skull with what the British call a “zed stick,” a spear used to put down a zombie with a single thrust through the brain.

The only problem with this 2022 incident was that the man RUCT was prepared to put down was not a zombie at all. He was a well-respected, very much alive and very much human surgeon who happened to be of South Asian ancestry. Fortunately, the man’s wife and son intervened before RUCT could dispatch him.

A subsequent parliamentary inquiry into the incident revealed that the doctor had been misidentified as a zombie by artificial intelligence (AI) facial recognition software. The responding RUCT officers, under orders to contain any undead threats “with extreme prejudice,” were prepared to kill the unsuspecting surgeon without bothering to establish whether he was, in fact, a zombie.

“RUCT was provided with guidance that they could absolutely trust the AI system to identify zombies in a crowd with 100% accuracy, and they should not hesitate to use lethal force rather than risk a repeat of the kind of outbreak portrayed in the 2002 docudrama 28 Days Later,” according to the report released last week by the parliamentarians who investigated the incident.

The “near-death by AI misclassification as a zombie,” as the report called the incident, prompted widespread protests against RUCT and the broader British law enforcement community. It also prompted calls from experts in AI technology and AI ethics to pause, if not end completely, the use of the AI facial recognition software that made the misclassification in the first place.

Concerns about AI Facial Recognition

London is famously one of the most surveilled cities in the world, with widespread use of CCTV in law enforcement. The use of AI for facial recognition in policing, security and criminal justice is a relatively new phenomenon but already highly controversial.

Those in favor of the technology argue that AI-powered facial recognition enhances public safety by aiding law enforcement in quickly identifying suspects, locating missing persons and preventing crimes. They say the technology can streamline investigations, potentially speeding up processes that could otherwise take significant manpower and time.

However, opponents, including civil rights and legal watchdog groups, have raised concerns about the accuracy of these systems, especially in correctly identifying individuals across different demographics. Research has shown, for example, that the algorithms underlying facial recognition carry built-in biases that cause them to misidentify people of color at disproportionately high rates.

The authors of an article published in Scientific American earlier this year point to several factors that can cause facial recognition technology to worsen racial inequities in policing: “We believe this results from factors that include the lack of Black faces in the algorithms’ training data sets, a belief that these programs are infallible and a tendency of officers’ own biases to magnify these issues.”

Recognizing Zombie Faces

These arguments against the use of facial recognition in policing of humans have now found new resonance in the debate over applying the technology to detecting zombies. The program to implement zombie facial recognition in the London Underground began in 2021 as a new generation of AI software came onto the market claiming near-perfect accuracy in identifying the undead.

At the time, privacy rights advocates raised concerns about using facial recognition software in public places because the systems can collect and process data without the consent or even knowledge of those being surveilled.

But London’s law enforcement authorities waved aside those objections, asserting that the system would be used only to track the undead, which, under British law, have no right to privacy. Heightened public concerns about a rising number of zombie attacks in London during the COVID pandemic pushed city leaders to move forward with the program.

AI ethics experts and zombie rights advocates worry that AI facial recognition could misidentify a human as undead, with potentially fatal consequences.

Opponents of this novel application of the algorithms include privacy advocates, policing skeptics and their newfound allies among the zombie rights community.

These groups argue that the training data used for zombie facial recognition lacks diverse representations of the undead, both across traditional demographics (e.g., race, gender) and across characteristics specific to the living dead, such as varying stages of decay and missing body parts, from ears and noses to entire lower jaws.

As a result, the opponents argue, the built-in biases in the facial recognition systems cause them to misidentify living humans from demographic groups under-represented in the training sets as zombies by default.
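
The failure mode the critics describe is easy to reproduce in miniature. The toy simulation below (Python with NumPy and scikit-learn; every group name, distribution and number is invented for illustration and drawn from no real system) trains a detector on data where one group of living humans is scarce, then audits how often each group is wrongly flagged as undead.

```python
# Toy illustration only: a "zombie detector" trained on lopsided data,
# then audited per demographic group. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, mean):
    """Draw n two-dimensional feature vectors; dimension 0 is a made-up 'decay score'."""
    return np.column_stack([rng.normal(mean, 1.0, n), rng.normal(0.0, 1.0, n)])

# Invented premise: the feature pipeline scores Group B humans closer to
# zombies (mean 1.5) than Group A humans (mean 0.0); zombies sit at 3.0.
X_train = np.vstack([
    sample(1000, 3.0),  # zombies
    sample(1000, 0.0),  # Group A humans: well represented
    sample(30, 1.5),    # Group B humans: badly under-represented
])
y_train = np.array([1] * 1000 + [0] * 1030)  # 1 = zombie, 0 = human

clf = LogisticRegression().fit(X_train, y_train)

# Audit: how often are living humans from each group flagged as undead?
for group, mean in [("A", 0.0), ("B", 1.5)]:
    false_alarm_rate = clf.predict(sample(5000, mean)).mean()
    print(f"Group {group} humans flagged as zombies: {false_alarm_rate:.1%}")
```

In this toy setup the scarce group’s false-alarm rate comes out several times higher than the well-represented group’s, not because Group B humans are more zombie-like, but because the classifier never saw enough examples of what they look like alive.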

Layla Amann, who founded the British advocacy group I Am Not a Zombie last year after the London incident, says that the systems routinely lead to misidentifications or false alarms.

“Biased facial recognition results in wrongful arrests or investigations, disproportionately affecting marginalized communities,” Amann says. “This perpetuates systemic injustices within a criminal justice system that already disadvantages those communities by providing the justification for the police to treat people from these communities in the same way that they treat the undead.”

“Zombie In, Zombie Out”

Harlan Schmidt, a London-based AI ethics expert, agrees. “Biased facial recognition perpetuates and reinforces societal biases because the technology essentially reflects and amplifies the prejudices present in the data used to train it. Zombie in, zombie out.”

Schmidt adds that addressing these issues requires a concerted effort to identify and rectify biases within the AI systems, including by ensuring the use of representative training data with a diverse set of zombies. He also argues for robust regulations to safeguard against discriminatory practices in the use of facial recognition technology to identify the undead.
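
Schmidt’s first remedy can be sketched in the same toy setting. When more Group B examples cannot be collected, one crude stand-in, shown below, is to up-weight the few that exist so the training mix is less lopsided; again, all names and numbers are illustrative, and a real system would need far more than reweighting.

```python
# Toy illustration of the "representative data" remedy via reweighting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, mean):
    return np.column_stack([rng.normal(mean, 1.0, n), rng.normal(0.0, 1.0, n)])

# Same invented mix as before: zombies, many Group A humans, few Group B.
X = np.vstack([sample(1000, 3.0), sample(1000, 0.0), sample(30, 1.5)])
y = np.array([1] * 1000 + [0] * 1030)  # 1 = zombie, 0 = human

# Up-weight the 30 Group B human rows (the last 30) to match Group A's mass.
w = np.ones(len(y))
w[-30:] = 1000 / 30

for name, weights in [("unweighted", None), ("reweighted", w)]:
    clf = LogisticRegression().fit(X, y, sample_weight=weights)
    fpr_b = clf.predict(sample(5000, 1.5)).mean()
    print(f"{name}: Group B humans flagged as zombies: {fpr_b:.1%}")
```

In the toy run the gap narrows rather than vanishes, which is why Schmidt pairs the data fix with regulation: statistical patches alone do not make a biased detector safe.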

In the wake of the London incident, the city’s constabulary paused the use of the AI zombie facial recognition system pending the outcome of the parliamentary inquiry. The report released last week recommended reactivating the system, but under more stringent rules of engagement for police responding to suspected zombie incidents flagged by the AI algorithm.

“The police should assume that an individual is human until they have probable cause to believe that the individual is, in fact, a flesh-eating undead cannibal who presents a clear and present danger to society and to the flesh and brains of those around them,” the report states.
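
The report does not spell out an algorithm, but its “assume human” presumption maps naturally onto a simple decision policy in which an AI alert alone never authorizes force, only further verification. A minimal sketch, with a wholly hypothetical threshold and field names of our own invention:

```python
# Hypothetical sketch of the report's "assume human" rule as code.
from dataclasses import dataclass

PROBABLE_CAUSE_THRESHOLD = 0.99  # invented; the report names no numbers

@dataclass
class Alert:
    camera_id: str
    zombie_score: float       # classifier confidence, 0.0 to 1.0
    officer_confirmed: bool   # independent on-the-scene human check

def respond(alert: Alert) -> str:
    """Default presumption: the subject is a living human."""
    if alert.zombie_score < PROBABLE_CAUSE_THRESHOLD:
        return "observe only"
    if not alert.officer_confirmed:
        return "detain safely and verify"  # an AI alert alone never authorizes force
    return "containment authorized"

print(respond(Alert("platform-cam-7", zombie_score=0.97, officer_confirmed=False)))
# -> observe only
```

The point of the structure is that “containment authorized” is unreachable from the AI score alone; a human check always sits on the path, which is precisely the presumption RUCT lacked on that Underground platform.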

Note: The Bulletin of the Zombie Scientists is a work of fiction. Any resemblance to persons (living, dead or living dead), actual organizations or actual events is entirely coincidental. See our About page and our Origin Story.
