IBM will no longer provide facial recognition technology to police departments for mass surveillance and racial profiling, IBM chief executive Arvind Krishna wrote in a letter to Congress.
Krishna wrote that such technology could be used by police to violate "basic human rights and freedoms," which would run counter to the company's values.
"We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," Krishna said.
Nationwide protests following the police killing of George Floyd have already led to changes in police departments across the country, on the use of force, police misconduct and police union contracts.
The moment of reckoning over the country's relationship with police also comes as artificial intelligence researchers and technologists continue to warn against facial recognition software, particularly over how some data-driven systems have been shown to be racially biased. For example, the MIT Media Lab found that the technology often fails to correctly identify the gender of darker-skinned faces, which could lead to misidentification.
"It is a welcome recognition that facial recognition technology, particularly as deployed by police, has been used to undermine human rights and to harm Black people specifically, as well as Indigenous people and other people of color," said Joy Buolamwini, who led the MIT study and is the founder of the Algorithmic Justice League.
Nate Freed Wessler, a lawyer with the ACLU's Speech, Privacy, and Technology Project, said that while he was encouraged by the news from IBM, other big tech companies remain committed to selling the software.
"It is good that IBM has taken this step, but it cannot be the only company," Freed Wessler told NPR. "Amazon, Microsoft and others are trying to make a lot of money by selling these dangerous and questionable tools to police. That should stop now."
At IBM, Krishna, who took over as CEO in April, cited the risk of the technology producing discriminatory results in announcing that the company would retire its "general purpose" facial recognition software.
"Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe," he wrote to congressional Democrats, who on Monday introduced a police reform bill that would ban federal law enforcement from using facial recognition technology. "But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported."
IBM had tested facial recognition software with the New York Police Department, but adoption by other law enforcement agencies appears to have been limited. Analysts who follow IBM noted that the company's facial recognition business had not generated much revenue, suggesting the decision may have come at little commercial cost.
Critics of surveillance technology who have called on Microsoft and Amazon to make similar commitments say that using data-mining tools to make public safety decisions could put citizens at risk.
"Facial recognition systems have much higher failure rates for people of color, women and young people, which can expose them to great harm at the hands of police," said Freed Wessler of the ACLU. "Whether that's a use of force, an arrest, or being stopped on the street."
Amazon is a major player in facial recognition software. Its Rekognition product has been used by local police departments in Florida and Oregon.
The ACLU discovered in 2018 that the software mistakenly identified 28 members of Congress as people arrested for crimes.
And Buolamwini found that when photos of several prominent Black women, including Oprah Winfrey and Michelle Obama, were scanned by Amazon's technology, the system incorrectly identified them as men.
Amazon has publicly defended its facial recognition software, saying studies disputing its accuracy have contained misconceptions about how the technology works.
"We know that facial recognition technology, when used irresponsibly, has risks," wrote Matt Wood, general manager of artificial intelligence at Amazon Web Services. "But we remain optimistic about the good this technology will provide in society, and we are already seeing tangible evidence of facial recognition helping to thwart child trafficking, reunite missing children with their parents, improve payment authentication and reduce credit card fraud."
Amazon did not return requests for comment on IBM's decision to exit the facial recognition business.
Microsoft, which offers facial recognition technology through its Azure cloud services, also did not return requests for comment.
Big Tech's use of facial recognition has sparked controversy and legal action over uses beyond law enforcement, too.
In January, Facebook agreed to pay half a billion dollars to settle a class-action lawsuit alleging it violated Illinois consumer privacy law by using facial recognition software to guess who appears in photos posted to the social network.