Minnesota advances bill that criminalizes sharing deepfake sexual images, content to influence elections
The MN Senate passed the bill in a nearly unanimous vote
- The Minnesota Senate has approved a bill that would make disseminating certain deepfakes illegal, as artificial intelligence technology has become easier to use than ever.
- Under the Minnesota measure, people could face criminal charges for distributing AI-generated sexual images of a person without that person's consent, or for distributing political misinformation intended to hurt a candidate or influence an election.
- The measure must go through a conference committee and get signed by Minnesota Gov. Tim Walz to become law.
In a nearly unanimous vote, Minnesota Senate lawmakers passed a bill Wednesday that would criminalize people who non-consensually share deepfake sexual images of others, and people who share deepfakes to hurt a political candidate or influence an election.
Deepfakes are videos and images that have been digitally created or altered with artificial intelligence or machine learning. Deepfake pornography and political misinformation have been created with the technology since it first began spreading across the internet several years ago. That technology is easier to use now than ever before.
The bill would allow prosecutors to seek penalties of up to five years in prison and $10,000 in fines for disseminating deepfakes. To become law, the bill must still go through a conference committee and be signed by Democratic Gov. Tim Walz.
Only one lawmaker voted against the bill on Wednesday.
"The concern I have is just the civil penalty. I want to see it higher," Republican Sen. Nathan Wesenberg, of Little Falls, said on the Senate floor before voting against the bill.
Supporters said the bill is cutting-edge and necessary.
"We need to protect all Minnesotans who might become victims of those that seek to use technology or artificial intelligence to threaten, harass, or ... humiliate anybody," Republican Sen. Eric Lucero, of St. Michael, said in support.
A handful of other states have passed similar legislation to combat deepfakes, said Democratic Sen. Erin Maye Quade, the Apple Valley lawmaker who championed the bill. Those states include Texas, California and Virginia.
"I think we're really behind at the federal level and the state level" on data privacy and technology regulation, Maye Quade said. "Just watching the advancement of AI technology, even in the last year, had me really concerned that we didn't have anything in place."
In a January video, President Joe Biden talked about tanks. But a doctored version of the video, made to appear as though he gave a speech attacking transgender people, amassed hundreds of thousands of views on social media that week.
Digital forensics experts said the video was created using a new generation of artificial intelligence tools, which allow anyone to quickly generate audio simulating a person's voice with a few clicks. And while the Biden clip may have failed to fool most users, it showed how easy it now is to generate hateful, disinformation-filled deepfake videos that could do real-world harm.
Some social media companies have been tightening up their rules to better protect their platforms against deepfakes.
TikTok said in March that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they are fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events.