A class action lawsuit filed Thursday in New York federal court claims a breakthrough facial recognition technology designed for law enforcement use is illegally taking people’s biometric information without their consent.

The lawsuit was filed by two people from Illinois who claim the company behind the technology, Clearview AI, illegally took photos from their social media profiles and stored their biometrics – in this case, scans of their facial geometry – in a database. The plaintiffs say this violates Illinois’ Biometric Information Privacy Act.

This is the latest attack on Clearview AI, new facial recognition software that can identify a person from a single photo. The founder, Hoan Ton-That, designed the app specifically to help law enforcement agencies solve crimes. Users upload a photo into the software and it instantly brings up any photos on the Internet matching that face, with links to the accompanying websites.

In a statement, the company said, “Clearview's legal team will respond to this lawsuit in due course. The company is committed to operating within bounds of applicable laws and regulations.”

Ton-That claims the app is 99.6 percent accurate in its matches and is being used by more than 600 law enforcement agencies, including the Chicago Police Department, which has paid $50,000 for a two-year trial.

Clearview AI estimates its app already has been used in thousands of cases to help identify shoplifters, murderers and pedophiles.

“We believe that what we're doing is in the public interest,” Ton-That said in an interview with Fox News. “When these pedophiles are caught, the investigators have all these photos, hundreds and hundreds of kids and for the first time ever, they are able to identify the victims.”

Clearview AI runs photos through its database, which the company claims contains more than 3 billion photos pulled from websites.

“It searches only publicly available material out there,” Ton-That told Fox News in an interview before the lawsuit was filed. “This is public data. We're not taking any personal data ... things that are out there on the Internet, in the public domain.”

However, Google, YouTube, Facebook, Twitter, Venmo and LinkedIn have sent cease-and-desist letters to Clearview AI in an effort to shut the app down. The companies said photos users put on their accounts are not in the public domain, and that harvesting people’s photos in bulk, a practice known as scraping, violates their terms of service.

“It’s a little hypocritical,” Ton-That said. “Google has a lot of personal and private information. They track where you go around the web and they sell ads to you and they have your private emails. We’re not taking any personal data.”

Google opted out of pursuing facial recognition in 2011. At the time, Google CEO Eric Schmidt acknowledged fears that mobile facial recognition could be used “in a very bad way.”

“What we’re doing is different,” Ton-That said. “We’re doing a tool for law enforcement and government to help solve crimes in the public interest.”

The 31-year-old founder told Fox News that his database only collects photos that have been posted on the Internet. But he acknowledged that includes photos that were posted and later deleted, even if they came from a social media profile that has since been made private. If a photo was ever posted publicly, it could be in his database.

Clearview AI first came to public attention through a New York Times investigation in which sources from police departments nationwide praised the application’s effectiveness in identifying suspects from surveillance images and other photos.

“This technology is merely used to generate a lead for detectives investigating a case,” said Howard Ludwig, a spokesman for the Chicago Police Department.

Leads generated using Clearview AI, he said, are “never used on their own to either detain or prosecute a suspect.”

In a similar dispute, LinkedIn accused hiQ, a data aggregator, of violating its user agreement by scraping information from LinkedIn profiles and selling that data, and demanded that hiQ stop; hiQ sued to preserve its access.

Last fall, the U.S. Court of Appeals for the 9th Circuit sided with hiQ. The court wrote that giving companies like LinkedIn free rein to decide who can collect and use data that is publicly available to viewers, and that the companies themselves collect and use, “risks the possible creation of information monopolies that would disserve the public interest.”

The existence of the application has set off a host of questions, and not everyone in law enforcement is welcoming the technology with open arms.

New Jersey Attorney General Gurbir Grewal has temporarily barred the state’s police departments from using the application, citing cybersecurity and privacy concerns, even though one department already has used it to help identify a pedophile.

“Some New Jersey law enforcement agencies started using Clearview AI before the product had been fully vetted,” Grewal said in a statement. “The review remains ongoing.”

Ton-That said the application is fully protected from hacking, boasting, “We’ve never had any breaches.”

Then there are fears the application will become available to the public. Anyone at a bar, taking the train or walking along the street could have their photo taken on an iPhone and be instantly identified.

Ton-That is adamant that won’t happen. “We're never going to make it a consumer app,” he said. “We don't want this to be, you know, everywhere. It has to be used in a controlled way.”

But the app has been made available to some banks, and critics note that Clearview AI’s investors reportedly are interested in making the technology available to everyone. “You don't always have to listen to your investors,” Ton-That quipped.

Sen. Ed Markey, D-Mass., sent Clearview AI a letter laying out his concerns and warning that the product “appears to pose particularly chilling privacy risks,” particularly if the technology is made widely available.

“It is capable of fundamentally dismantling Americans’ expectation that they can move, assemble or simply appear in public without being identified,” Markey wrote.

Markey’s questions included whether Clearview AI’s technology identifies children (it does) and whether the software has been installed on 24/7 surveillance cameras or real-time police body cameras (it has not).

“Well, one thing to think about is, it is not 24/7 surveillance. I think that would be a world we don't want to live in. That's how China is right now,” Ton-That said. “Nations like Russia, China and Iran who are contrary to the U.S. interests. We have no interest in doing business with them. Our customers are in the USA and we want to make sure that nothing is compromised.”

Brenda Leong, senior counsel of AI and Ethics at the Future of Privacy Forum, believes Clearview AI should not scrape people’s photos off websites. “They are stealing our personal data,” she said.

Leong added there could be particular concerns for people with stalkers or victims of domestic violence. “Maybe the pictures were uploaded by a friend and it’s a group shot, and they have no say over how that image is collected or used,” Leong said.

Asked if he believed Clearview AI is the beginning of the end of anonymity as we know it, Ton-That paused and said, “You know, people are posting information online all the time ... so what I say is maybe it’s already happened.”