Stanford professor getting death threats over 'gaydar' research

Women kiss during a gay pride march in Podgorica, Montenegro, on Sept. 23, 2017. (AP Photo/Risto Bozovic)

"Our findings expose a threat to the privacy and safety of gay men and women," wrote Michal Kosinski in a paper set to be published by the Journal of Personality and Social Psychology—only he's the one now finding himself in danger.

The New York Times takes a look at the quagmire Kosinski finds himself in after he tried, with some measure of success, to build what many are referring to as "AI gaydar." The Stanford Graduate School of Business professor tells the Times he set out to use facial recognition analysis to determine whether someone is gay in order to show how such analysis could reveal the very things we want to keep private.

Now he's getting death threats. The Times delves into the research, first highlighted by the Economist in early September, and the many bones its critics have to pick with it.

Kosinski and co-author Yilun Wang pulled 35,000 photos of white Americans from online dating sites (those looking for same-sex partners were classified as gay) and ran them through a "widely used" facial analysis program that turns the location, size, and shape of one's facial characteristics into numbers.
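For readers wondering what "turning facial characteristics into numbers" looks like in practice, here is a minimal, hypothetical sketch: each face is reduced to a vector of numeric measurements, and a standard classifier is trained on labeled examples. The random features, stand-in labels, and use of scikit-learn's logistic regression are illustrative assumptions, not details of Kosinski and Wang's actual pipeline.

```python
# Illustrative sketch only: random numbers stand in for the facial
# measurements and labels used in the actual study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 1,000 faces, each reduced to 20 numeric measurements
# (e.g., distances and ratios between facial landmarks).
X = rng.normal(size=(1000, 20))
y = rng.integers(0, 2, size=1000)  # random stand-in labels; the study drew labels from dating profiles

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fit a simple classifier on the numeric features and check held-out accuracy.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~0.5 here, since the data is random
```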

Humans who looked at the photos correctly identified women as gay or straight 54% of the time and men 61% of the time; the program, when given five photos per person, got it right 83% of the time for women and 91% for men.

One critic explains that while 91% might sound impressive, it's not. In a scenario where 50 out of every 1,000 people are gay, the program would flag about 130 people as gay (0.91 times the 50 gay people, plus 0.09 times the 950 straight people); it would be right about 45 of those people and wrong about 85, meaning roughly two out of every three people it flagged would actually be straight.
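To make the critic's arithmetic concrete, here is a short worked version of that calculation. The 5% base rate and the treatment of 91% as both the true-positive and true-negative rate are the critic's assumptions for the example, not results reported by the study itself.

```python
# Base-rate arithmetic behind the critic's point: a classifier that is right
# 91% of the time still flags more straight people than gay people when only
# 5% of the population is gay.
population = 1000
gay = 50                      # 5% base rate, as in the article's example
straight = population - gay
accuracy = 0.91               # assumed to apply equally to gay and straight faces

true_positives = accuracy * gay               # gay people correctly flagged (45.5)
false_positives = (1 - accuracy) * straight   # straight people wrongly flagged (85.5)
flagged = true_positives + false_positives    # ~131 people flagged in total

precision = true_positives / flagged
print(f"flagged: {flagged:.1f}  correct: {true_positives:.1f}  "
      f"wrong: {false_positives:.1f}  precision: {precision:.0%}")
# flagged: 131.0  correct: 45.5  wrong: 85.5  precision: 35%
# (the article rounds these to 130, 45, and 85)
```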


This article originally appeared on Newser: His Quest to Create 'Gaydar' Had Unintended Consequences