
Developers Created AI to Generate Police Sketches. Experts Are Horrified

Police forensics is already plagued by human biases. Experts say AI will make it even worse.
Image: Screenshot of the Forensic Sketch AI-rtist program creating a police sketch, by Sasha Luccioni

Two developers have used OpenAI’s DALL-E 2 image generation model to build a forensic sketch program that produces “hyper-realistic” police sketches of a suspect based on user inputs.

The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program's purpose is to cut down the time it usually takes to produce a forensic sketch of a suspect, which is “around two to three hours,” according to a presentation uploaded to the internet.


“We haven’t released the product yet, so we don’t have any active users at the moment,” Fortunato and Reynaud told Motherboard in a joint email. “At this stage, we are still trying to validate if this project would be viable to use in a real world scenario or not. For this, we’re planning on reaching out to police departments in order to have input data that we can test this on.”

AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.     

“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”

The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait. 
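As described, the tool's workflow amounts to assembling the witness's answers into a text prompt and sending it to DALL-E 2's image endpoint. The sketch below is a minimal illustration of that kind of pipeline, not the developers' actual code: it assumes the legacy OpenAI Python client's Image.create endpoint, and the field names, prompt wording, and generate_profile helper are hypothetical.

```python
# Hypothetical illustration of a description-to-DALL-E-2 pipeline.
# This is NOT the Forensic Sketch AI-rtist source code; field names and
# prompt wording are assumptions for illustration only.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder


def build_prompt(fields: dict) -> str:
    """Flatten template fields (gender, skin color, hair, etc.) into one prompt string."""
    traits = ", ".join(f"{k}: {v}" for k, v in fields.items() if v)
    return f"Realistic forensic sketch portrait of a person. {traits}"


def generate_profile(fields: dict) -> str:
    """Send the assembled description to the image model and return the image URL."""
    response = openai.Image.create(
        prompt=build_prompt(fields),
        n=1,
        size="512x512",
    )
    return response["data"][0]["url"]


if __name__ == "__main__":
    description = {
        "gender": "male",
        "age": "around 40",
        "hair": "short gray hair",
        "eyes": "brown",
        "jaw": "square jaw",
    }
    print(generate_profile(description))
```

A pipeline like this passes whatever wording the witness or officer supplies straight into the prompt, which is exactly where critics say the biases of the description and of the model compound.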


“Research has shown that humans remember faces holistically, not feature-by-feature. A sketch process that relies on individual feature descriptions like this AI program can result in a face that’s strikingly different from the perpetrator’s,” Lynch said. “Unfortunately, once the witness sees the composite, that image may replace, in their minds, their hazy memory of the actual suspect. This is only exacerbated by an AI-generated image that looks more ‘real’ than a hand-drawn sketch.”

Creating hyper-realistic suspect profiles resembling innocent people would be especially harmful to Black and Latino people, with Black people being five times more likely to be stopped by police without cause than white people. People of color are also more likely to be stopped, searched, and suspected of a crime, even when no crime has occurred.

“If these AI-generated forensic sketches are ever released to the public, they can reinforce stereotypes and racial biases and can hamper an investigation by directing attention to people who look like the sketch instead of the actual perpetrator,” Lynch said, adding that mistaken eyewitness identifications contributed to 69 percent of wrongful convictions that were later overturned by DNA evidence in the US. Overall, false or misleading forensics—including police sketches—have contributed to almost 25 percent of all wrongful convictions across the US. 

The addition of DALL-E 2 to the already unreliable process of witness descriptions worsens the issue. Sasha Luccioni, a Research Scientist at Hugging Face who tweeted about the police sketch program, told Motherboard that DALL-E 2 contains many biases—for example, it was known to display mostly white men when asked to generate an image of a CEO. Luccioni said that though such examples repeatedly crop up, researchers still haven’t been able to pinpoint the exact source of the model's biases, and so cannot take the right measures to correct them. OpenAI continues to develop methods to mitigate bias in its models' output.

Image: Screenshot of DALL-E 2 Generation by Sasha Luccioni

“Typically, it is marginalized groups that are already even more marginalized by these technologies because of the existing biases in the datasets, because of the lack of oversight, because there are a lot of representations of people of color on the internet that are already very racist, and very unfair. It's like a kind of compounding factor,” Luccioni added. Like other AI experts, she describes the process as a feedback loop in which AI models contain, produce, and perpetuate bias as the images they generate continue to be used. 

Fortunato and Reynaud said that their program runs with the assumption that police descriptions are trustworthy and that “police officers should be the ones responsible for ensuring that a fair and honest sketch is shared.” 

“Any inconsistencies created by it should be either manually or automatically (by requesting changes) corrected, and the resulting drawing is the work of the artist itself, assisted by EagleAI and the witness,” the developers said. “The final goal of this product is to generate the most realistic drawing of a suspect, and any errors should be corrected. Furthermore, the model will most likely not produce the ideal result in just one attempt, thus requiring iterations to achieve the best result possible.”

The developers themselves admit that there are no metrics to measure the accuracy of the generated image. In a criminal case, inaccuracies may not be corrected until the suspect is found or has already spent time in jail. And just as when police share the names and photos of suspects on social media, the sharing of an inaccurate image before then may also place suspicion on already over-criminalized populations. Critics also point out that the developers’ assumption of police neutrality ignores well-documented evidence that cops routinely lie while presenting evidence and testifying in criminal cases.

Fortunato and Reynaud’s AI tool isn’t the first software to create controversy with generated images of suspects. In October 2022, the Edmonton Police Service (EPS) shared a computer-generated image of a suspect that was created with DNA phenotyping, prompting backlash from privacy and criminal justice experts and leading the department to delete the image from its website and social media. Again, the lack of accuracy in the dissemination of a seemingly realistic photo put innocent people at risk. “I prioritized the investigation – which in this case involved the pursuit of justice for the victim, herself a member of a racialized community – over the potential harm to the Black community. This was not an acceptable trade-off and I apologize for this,” wrote Enyinnah Okere, the chief operating officer of EPS, in a press release following the backlash.

Last year, a report by the Center on Privacy & Technology found that AI facial recognition tools often lead to bias and error in forensic cases. The report stated that facial recognition is an unreliable source of identity evidence and that the algorithmic and human steps in a face recognition search may compound each other’s mistakes. “Since faces contain inherently biasing information such as demographics, expressions, and assumed behavioral traits, it may be impossible to remove the risk of bias and mistake,” the report said.

“I think that as this technology matures, we should start developing norms of things that these models can and cannot be used for. So for me, this forensics sketch artist is very clearly something that we should not be using generative technology for,” Luccioni said. “And so no matter how well we know the biases that are in the models, there are just certain applications that it shouldn't be used for.”

OpenAI declined to comment on the record about the use of its technology in Fortunato and Reynaud’s project.