Ocasio-Cortez slams Amazon for 'bias' in its facial detection technology
Rep. Alexandria Ocasio-Cortez slammed Amazon for what she characterized as "bias" in the tech giant's facial detection technology.
The freshman congresswoman told her 2.7 million Twitter followers: "When you don't address human bias, that bias gets automated. Machines are reflections of their creators, which means they are flawed, & we should be mindful of that. It's one good reason why diversity isn't just 'nice,' it's a safeguard against trends like this."
The study Ocasio-Cortez referenced in her tweet, which was conducted by researchers from MIT and the University of Toronto, found that Amazon's technology labeled darker-skinned women as men 31 percent of the time. Lighter-skinned women were misidentified 7 percent of the time.
Matt Wood, general manager of artificial intelligence with Amazon's cloud-computing unit, told the Associated Press that the study used "facial analysis" and not "facial recognition" technology. Wood said facial analysis "can spot faces in videos or images and assign generic attributes such as wearing glasses; recognition is a different technique by which an individual face is matched to faces in videos and images."
Wood also said Amazon has updated its technology since the study — which ran tests in August of last year — and its own analysis showed "zero false positive matches."
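The distinction Wood draws maps onto two different kinds of request made to Amazon's service. The sketch below, written against the boto3 Python SDK with placeholder image files, illustrates the difference between analysis (assigning generic attributes) and recognition (matching one face against another); it is an illustration of the terminology, not the code used in the study or by Amazon.

```python
import boto3

# A minimal sketch, not Amazon's or the researchers' code. Assumes AWS credentials
# are configured and that "photo.jpg" / "other_photo.jpg" are placeholder images.
client = boto3.client("rekognition", region_name="us-east-1")

with open("photo.jpg", "rb") as f:
    photo = f.read()
with open("other_photo.jpg", "rb") as f:
    other_photo = f.read()

# Facial analysis: find faces and assign generic attributes (gender label, glasses, ...)
analysis = client.detect_faces(Image={"Bytes": photo}, Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print("analysis:", face["Gender"]["Value"], face["Eyeglasses"]["Value"])

# Facial recognition: ask whether the face in one image matches the face in another
comparison = client.compare_faces(
    SourceImage={"Bytes": photo},
    TargetImage={"Bytes": other_photo},
    SimilarityThreshold=90,
)
for match in comparison["FaceMatches"]:
    print("recognition: similarity", round(match["Similarity"], 1))
```

The MIT and University of Toronto study measured the first kind of call, the gender labels returned by analysis, which is why the error-rate figures are framed as misclassification rather than mistaken identity.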
However, MIT Media Lab researcher Joy Buolamwini responded in a post on Medium that the public "cannot rely on Amazon to police itself or provide unregulated and unproven technology to police or government agencies."
"The terminology used in the field is not always consistent, and you might see terms like “face recognition” or “facial recognition” being used interchangeably," Buolamwini, who is pursuing a PhD focused on participatory AI at MIT Media Lab's Center for Civic Media, wrote on Medium. "Often times companies like Amazon provide AI services that analyze faces in a number of ways offering features like labeling the gender or providing identification services. All of these systems regardless of what you call them need to be continuously checked for harmful bias."
Amazon has been criticized by lawmakers, civil liberties groups and shareholders over its facial recognition technology, which is known as "Rekognition."
A group of shareholders put out a statement saying investors are worried that the facial recognition technology will be used by local and federal government agencies "to justify the surveillance, exploitation, and detention of individuals seeking to enter the U.S." and urging Amazon to work with civil liberties and human rights experts to assess the program's impact.
A 2018 ACLU study, which Amazon disputes, showed that the company's facial recognition technology wrongly tagged 28 members of Congress as police suspects, further fueling concerns about racial bias.
In its public blog, Amazon says facial recognition technology should not be a substitute for human judgment.
"In all public safety and law enforcement scenarios, technology like Amazon Rekognition should only be used to narrow the field of potential matches. The responses from Amazon Rekognition allow officials to quickly get a set of potential faces for further human analysis. Given the seriousness of public safety use cases, human judgment is necessary to augment facial recognition, and facial recognition software should not be used autonomously," the company says.
Amazon also notes that facial recognition technology, which is already in use at some U.S. airports to move travelers through security more quickly, can be used to identify victims of human trafficking and to prevent fraud when customers use certain financial services apps.
Technologists, activists and Amazon's own employees are likely to keep the pressure on the company. Algorithms, which power everything from Siri to the results you see in Google's search engine, can be misused in ways that send innocent people to jail.
"Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions," Ocasio-Cortez, who also recently said "tech monopolies" are one of the biggest threats to journalism, told the writer Ta-Nehisi Coates at the annual MLK Now event. "They're just automated assumptions. And if you don't fix the bias, then you are just automating the bias."
Data scientist Emily Gorcenski defended Ocasio-Cortez's comments about biased algorithms.
"This phenomenon is well recognized in the field," Gorcenski wrote on Twitter. "Not only is the phenomenon well understood; its fundamental behaviors are critical to being an effective data scientist. All data scientists acknowledge that algorithms will overfit to biases without controls. There’s an entire subfield dedicated to studying ways to avoid this."
The Associated Press contributed to this report.