
New Mexico Attorney General Raúl Torrez speaks at a press conference on Thursday, Jan. 15, where he and State Representative Linda Serrato (D-45) announced they would be introducing legislation to regulate artificial intelligence.

NM AG announces legislation to protect victims of AI deepfakes

On Thursday, Jan. 15, New Mexico Attorney General Raúl Torrez and State Representative Linda Serrato (D-Santa Fe) announced legislation meant to curb the spread of harmful images generated by artificial intelligence. The legislation will be voted on during the upcoming legislative session, which begins on Jan. 20. 

The “Artificial Intelligence Accountability Act” would require generative AI services and social media platforms to embed markers or “signatures” into images, allowing law enforcement to trace illegal AI-generated content back to its source, and would allow the New Mexico Department of Justice to investigate tech companies for infractions, Torrez said during a press conference at the New Mexico Department of Justice office in Albuquerque.
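
The bill text has not yet been released, so the exact form these markers would take is unspecified. As a purely illustrative sketch, and not the scheme named in the legislation, a generating service could attach a signed provenance tag to an image’s metadata that investigators could later read back and verify; the field names, tag contents and signing key below are hypothetical.

# Illustrative sketch only: assumes a simple signed provenance tag in PNG metadata.
import hashlib
import hmac
import json

from PIL import Image
from PIL.PngImagePlugin import PngInfo

SIGNING_KEY = b"platform-secret-key"  # hypothetical key held by the generating service

def embed_provenance(in_path: str, out_path: str, source_id: str) -> None:
    """Attach a traceable provenance record to a generated image."""
    record = {"generator": source_id, "tool": "example-model"}
    payload = json.dumps(record, sort_keys=True)
    # Sign the record so tampering with the tag is detectable.
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai-provenance", payload)
    meta.add_text("ai-provenance-sig", signature)
    img.save(out_path, pnginfo=meta)

def read_provenance(path: str) -> dict | None:
    """Recover and verify the provenance record, if present."""
    img = Image.open(path)
    payload = img.text.get("ai-provenance")
    signature = img.text.get("ai-provenance-sig")
    if not payload or not signature:
        return None
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return json.loads(payload) if hmac.compare_digest(signature, expected) else None

Metadata tags like this are easy to strip from a file; content-provenance standards such as C2PA bind cryptographically signed manifests to the content itself, which is closer to what a traceability requirement would likely demand.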

The legislation would impose fines of up to $15,000 per violation for every day a company is out of compliance, according to a fact sheet released by the New Mexico Department of Justice.

“Artificial intelligence is rapidly becoming a part of everyday life. It is being utilized by businesses. It’s being utilized by everyday citizens. We utilize it to some extent in this building,” Torrez said. “But like all technology, it is something that can be misused and abused. And one of the most damaging aspects of artificial intelligence today is the ease with which people can create malicious, deepfake content.” 

A deepfake is a video, photo or audio recording that seems real but has been manipulated with AI, according to the U.S. Government Accountability Office. AI can replace faces, manipulate facial expressions, synthesize faces and synthesize speech, with deepfakes depicting someone appearing to say or do something that they never said or did.

The legislation also creates civil liability for individuals who produce and knowingly reshare malicious deepfakes, and allows victims of deepfakes to sue perpetrators, including recovery of damages or $1,000 per impression, Torrez said.

“That is a substantial penalty for a malicious deepfake that racks up a million likes or a million clicks or is reshared that many times. But I think that is setting a penalty that is appropriate to deter that kind of behavior, given the nature of the harm we’re seeing,” Torrez said. 

The legislation allows an additional year to be added to a felony sentence if AI was used in connection to the felony, according to the fact sheet. 

“I think we all have a basic sense of what it means to be accountable for our actions, and accountable for our choices. One of the difficult things, because of this technology, is the way in which it allows anonymous people to create real harm to others,” Torrez said. “Because of that, we are proud to announce our proposal, which will be the first AI accountability act in the state of New Mexico, and it provides certain core elements that we think are essential to protecting our people.”

The legislation announcement follows the arrest of Richard Gallagher for possession and dissemination of AI-generated child sexual abuse material, which he allegedly created using AI “undressing” tools to convert clothed photos of children to unclothed photos, according to the New Mexico DOJ.

“This is the first instance as far as we are aware of someone who actually used artificial intelligence to generate images of sexual exploitation by using publicly available images of children and then modifying those images in a way which is truly horrific and will create lasting harm,” Torrez said. “In a broader sense, I think it is a significant turning point and should hopefully serve as a wake up call for all of us — for policymakers, for community leaders, for parents and educators.”

Serrato said the full bill text will be available at the start of the legislative session on Jan. 20, when it will be up for discussion and a vote.


“Our job, as leaders in the state, is to make sure that (New Mexicans) have a sense of comfort and protection. That is our New Mexico values. I have talked to other states and too few folks come from a point of protection for us oftentimes, because we all want to see our states thrive,” Serrato said. “We want to be the leaders in different things. But this is an opportunity for New Mexico to show how we can create an ecosystem where AI works for us. It’s not about just making it better for business or for those who are profiting off of it.”

Torrez said he was confident in the bill and in the New Mexico DOJ’s ability to implement its protections despite the Trump administration’s recent executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” which aims in part to reduce state-by-state AI regulations.

The order, signed by President Trump on Dec. 11, 2025, directs his administration to “(take) action to check the most onerous and excessive laws emerging from the States that threaten to stymie innovation” in regard to regulating AI, including by potentially withholding federal funding for broadband programs from states with such regulations in place.

Adam Billen, vice president of Encode, a nonprofit focused on child safety and threats posed by AI, told NPR in December 2025 that the order could have a “chilling effect” on states’ willingness to regulate AI. 

Torrez said the state of New Mexico is empowered by the U.S. Constitution to protect its citizens, and that the federal government can only preempt state action with a bill passed by Congress that explicitly prevents states from taking such action.

“There are a number of red states, very conservative states that are pushing back against the administration because they recognize that this is not a form of technology that can be frankly left up to tech companies and leaders of those companies to police themselves,” Torrez said. “I certainly would hope that the substance of what we are trying to achieve and accomplish would be reviewed by the Trump administration before they undertake any action, but I can tell you that we stand prepared to withstand any legal challenge that’s brought.”

Addison Fulton is the culture editor for the Daily Lobo. She can be reached at culture@dailylobo.com or on X @dailylobo
