IBM will no longer provide facial recognition technology to police departments for mass surveillance and racial profiling, Arvind Krishna, IBM's chief executive, wrote in a letter to Congress.
Krishna wrote that such technology could be used by police to violate "basic human rights and freedoms," and that would be out of step with the company's values.
"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," Krishna said.
The nationwide demonstrations following the police killing of George Floyd have already led to changes at police departments around the country, including to use-of-force policies, the handling of police misconduct and police union contracts.
The moment of reckoning over the country's relationship with law enforcement also comes as artificial-intelligence researchers and technology scholars continue to warn about facial recognition software, particularly how some of the data-driven systems have been shown to be racially biased. For instance, the MIT Media Lab has found that the technology is often less successful at identifying the gender of darker-skinned faces, which could lead to misidentifications.
"This is a welcome recognition that facial recognition technology, especially as deployed by police, has been used to undermine human rights, and to harm Black people specifically, as well as Indigenous people and other People of Color," said Joy Buolamwini, who conducted the MIT study and is the founder of the Algorithmic Justice League.
Nate Freed Wessler, a lawyer with the ACLU's Speech, Privacy, and Technology Project, said while he was encouraged by the news from IBM, other major technology companies are still standing by the software.
"It's good that IBM took this step, but it can't be the only company," Freed Wessler told NPR. "Amazon, Microsoft and other corporations are trying to make a lot of money by selling these dangerous, dubious tools to police departments. That should stop right now."
At IBM, Krishna, who took over as CEO in April, cited the technology's risk of producing discriminatory results in announcing that the company is dropping its "general purpose" facial recognition software.
"Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe," he wrote to Congressional Democrats, who introduced police reform legislation on Monday that would ban federal law enforcement from using facial recognition technology. "But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported."
IBM had tested facial recognition software with the New York Police Department, but its adoption by other law-enforcement agencies appears to be limited. Analysts who track IBM have noted that the company's facial recognition did not pull in much revenue, suggesting that the decision perhaps made good business sense.
Critics of the surveillance technology who have called on Microsoft and Amazon to make similar commitments say relying on data-mining tools to make public-safety decisions could endanger citizens.
"Face recognition systems have much higher failure rates when it comes to people of color, women and younger people, which can subject them to great harm by police," the ACLU's Freed Wessler said. "Whether to use force, whether to arrest someone, whether to stop someone on the street."
Amazon is a major player in facial recognition software. Its Rekognition product has been used by local police departments in Florida and Oregon.
The ACLU found in 2018 that the software mistakenly identified 28 members of Congress as people who had been arrested for crimes.
And Buolamwini has found that when photos of several prominent Black women, including Oprah Winfrey and Michelle Obama, are scanned by Amazon's technology, the system wrongly identifies them as men.
Amazon has publicly defended its facial recognition software, saying studies challenging its accuracy have contained misperceptions about how the technology operates.
"We know that facial recognition technology, when used irresponsibly, has risks," wrote Matt Wood, general manager of artificial intelligence at Amazon Web Services. "But we remain optimistic about the good this technology will provide in society, and are already seeing meaningful proof points with facial recognition helping thwart child trafficking, reuniting missing kids with parents, providing better payment authentication, or diminishing credit card fraud."
Amazon did not respond to a request for comment on IBM's decision to walk away from facial recognition software.
Microsoft, which offers facial recognition technology through its Azure cloud computing services, also did not respond to a request for comment.
Big Tech's facial recognition work has also sparked controversy and legal action over uses beyond law enforcement.
In January, Facebook agreed to pay half a billion dollars to settle a class-action lawsuit alleging that its use of face-matching software to guess who appears in photos posted to the social network violated Illinois consumer privacy laws.