Sexual predators are using a powerful new tool to exploit children -- AI image generators. Users on a single dark-web forum shared nearly 3,000 AI-generated images of child sexual abuse in just one month, according to a recent report from the UK-based Internet Watch Foundation.
Unfortunately, current child sexual abuse laws are outdated. They don't adequately account for the unique dangers AI and other emerging technologies pose. Lawmakers must act fast to put legal protections in place.
The national CyberTipline -- a reporting system for suspected online child exploitation -- received a staggering 32 million reports in 2022, up from 21 million just two years prior. That already disturbing figure is sure to grow with the rise of image-generating AI platforms.
AI platforms are "trained" on existing visual material. Sources used to create images of abuse may include real children's faces taken from social media, or photographs of real-life exploitation. Given the tens of millions of abusive images online, there is an almost inexhaustible amount of source material from which AI can generate even more harmful images.
The most advanced AI-generated images are now virtually indistinguishable from unaltered photographs. Investigators have found new images of old victims, images of "de-aged" celebrities who are depicted as children in abuse scenarios, and "nudified" images taken from otherwise benign photos of clothed children.
The problem is growing by the day. Text-to-image software can easily create images of child abuse based on whatever the perpetrator wants to see. And much of this technology is downloadable, so offenders can generate images offline without fear of discovery.
Using AI to create pictures of child sex abuse is not a victimless crime. Behind every AI image, there are real children. Survivors of past exploitation are re-victimized when new portrayals are created using their likeness. And studies show that a majority of those who possess or distribute child sex abuse material also commit hands-on abuse.
Adults can also use text-generating AI platforms like ChatGPT to lure children more effectively, updating an old tactic. Criminals have long used fake online identities to meet young people in games or on social media, gain their trust, manipulate them into sending explicit images, and then "sextort" them for money, more pictures, or physical acts.
But ChatGPT makes it shockingly easy to masquerade as a child or teen with youthful language. Today's criminals can use AI platforms to generate realistic messages with the goal of manipulating a young person into engaging in an online interaction with someone they think is their own age. Even more terrifying, many modern AI tools have the capacity to quickly "learn" -- and therefore teach people -- which grooming techniques are the most effective.
President Biden recently signed an executive order aimed at managing the risks of AI, including protecting Americans' privacy and personal data. But we need help from lawmakers to tackle AI-assisted online child abuse.
For starters, we need to update the federal legal definition of child sexual abuse material to include AI-generated depictions. As the law currently stands, prosecutors must show harm to an actual child. But this requirement is out of step with today's technology. A defense team could plausibly argue that AI-generated child sexual abuse material does not depict a real child and therefore isn't harmful, even though we know that AI-generated images often pull from source material that victimizes real children.
Second, we must adopt policies requiring tech companies to continuously monitor their platforms for exploitative material and report it. Some companies proactively scan for such images, but there's no requirement that they do so. Only three companies were responsible for 98% of all CyberTips in 2020 and 2021: Facebook, Google, and Snapchat.
Many state child sex abuse laws identify "mandatory reporters," or professionals like teachers and doctors who are legally required to report suspected abuse. But in an era in which we live so much of our lives online, employees of social media and other tech companies ought to have similar legally mandated reporting responsibilities.
Finally, we need to rethink how we use end-to-end encryption, in which only the sender and receiver can access the content of a message or file. While it has legitimate applications, such as protecting banking and medical records, end-to-end encryption can also help people store and share child abuse images. To illustrate just how many abusers could go undetected, consider that out of the 29 million tips the CyberTipline received in 2021, just 160 came from Apple, which maintains end-to-end encryption for iMessage and iCloud.
Even if law enforcement has a warrant to access a perpetrator's files, a tech company with end-to-end encryption can claim that it can't access those files and can't help. Surely an industry built on innovation is capable of developing solutions to protect our children -- and making that a priority.
AI technology and social media are evolving every day. If lawmakers act now, we can prevent widespread harm to kids.
Teresa Huizar is CEO of National Children's Alliance, America's largest network of care centers for child abuse victims.