Amazon will prevent police from using its facial recognition technology for a year pending federal regulation of the surveillance tool, the company said in a blog post on Wednesday.
The announcement follows years of pressure from police reform advocates and privacy activists, including the American Civil Liberties Union, urging Amazon to stop marketing its facial recognition tool to police, out of concern that it is racially biased and could be used to build an oppressive system for automatically identifying and tracking anyone.
“Face recognition technology gives governments unprecedented power to spy on us wherever we go. It is fueling police abuse. This surveillance technology must be stopped,” said Nicole Ozer, director of technology and civil liberties at the ACLU of Northern California, in response to Amazon’s announcement.
Amazon said it has advocated for governments to put in place “tougher regulations to govern the ethical use of facial recognition technologies”, noting that Congress “appears to be up to the challenge.”
“We hope this one-year moratorium could give Congress enough time to implement the appropriate rules, and we are ready to help if necessary,” the company said.
Liz O’Sullivan, privacy activist and founder of Arthur AI, described the announcement as a “victory for activists and academics” who have been calling for stricter facial recognition regulations for years. But she noted that it was also “an admission that the entire surveillance system is flawed, biased and has racial implications.”
“We must ensure that this moratorium turns into a permanent ban,” she said, calling on activists and members of the public to pressure their local decision-makers to ensure that any regulation “serves citizens over corporate interests.”
The one-year moratorium on police use of Amazon Rekognition does not include organizations that work closely with law enforcement to identify victims of child sexual abuse and human trafficking, such as the nonprofit Thorn, the National Center for Missing and Exploited Children and Marinus Analytics.
In July 2018, the ACLU ran a test of Amazon Rekognition and found that it incorrectly matched 28 members of Congress, identifying them as other people who had been arrested for a crime.
At the time, Amazon said the ACLU had set the system’s confidence threshold lower than the recommended level, leading to a higher number of false positives.
“Machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it’s applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza,” the company said in a July 2018 statement.
The announcement followed a similar commitment from IBM on Monday, when the company’s CEO, Arvind Krishna, wrote a letter to Congress stating that IBM would no longer develop or offer facial recognition technology.
Krishna said the company “firmly opposes” the use of facial recognition technology for “mass surveillance, racial profiling, violations of basic human rights and freedoms.”
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” he said.