Secretary of state’s advertising campaign fights AI-driven election misinformation
In preparation for the November election, Secretary of State Maggie Toulouse Oliver has launched a statewide campaign to educate voters on the risks of artificial intelligence and deepfakes in elections, aiming to raise awareness and safeguard the integrity of the electoral process.
The campaign, which launched in May, aims to help voters make informed decisions in 2024 and beyond.
“With the creation of deepfakes and other manipulated media through AI software, seeing is no longer believing,” Toulouse Oliver said. “We recommend that voters only access information from trusted sources – like their county clerk or the Secretary of State’s Office – while remaining skeptical of material from unknown entities. As the campaign’s tagline states, ‘When in Doubt, Check It Out.’”
The campaign builds on previous work by the secretary of state, such as the Rumor vs. Reality website and Your Vote Counts, New Mexico!, which highlighted county election officials’ efforts and educated voters about New Mexico’s election integrity.
The Seeing is No Longer Believing campaign was promoted across social media, television, radio, outdoor billboards, and print publications. The Secretary of State’s Office also created a dedicated webpage to help voters learn about the campaign and recognize AI-generated images, videos and audio clips designed to manipulate voters.
Other AI news
U.S. Sen. Martin Heinrich (D) was recently named one of Time Magazine’s 100 Most Influential People in AI. He co-founded the bipartisan Senate Artificial Intelligence Caucus in 2019 and serves as its co-chair alongside U.S. Sen. Mike Rounds (R-SD).
Heinrich is a co-author of several congressional AI-related measures, including the Roadmap for Artificial Intelligence (AI) Policy in the U.S. Senate, which was released by the Senate AI Working Group in collaboration with Senate Majority Leader Chuck Schumer (D-NY) and Rounds.
“This is very fast-moving technology, and the opportunities for great things are enormous, but the opportunities for abuse are also substantial,” Heinrich said in the Time Magazine article.
According to the Associated Press, California Gov. Gavin Newsom signed three bills Sept. 17, one of which took effect immediately, aimed at curbing the use of artificial intelligence to create misleading images or videos in political ads ahead of the 2024 election.
California’s new laws make it illegal to create or publish election-related deepfakes within 120 days before and 60 days after Election Day, require large social media platforms to remove deceptive material starting next year, and mandate that political campaigns disclose if their ads include AI-altered materials.
The Secretary of State’s Office released these tips for identifying AI-generated or altered material:
In photos:
- Irregularities in human features: Look at the hands, fingers and facial symmetry. AI generators struggle with these features.
- Inconsistent shadows and lighting: Shadows or lighting that don’t match the apparent light source are common in AI images.
- Odd facial features: Look for asymmetry or irregularities in faces, such as odd eye placement, ears at different heights or noses and mouths that seem out of alignment.
- Too much perfection: AI images often have a plastic or overly perfect feel. They lack the imperfections and subtle variations of real images.
In videos:
- Strange shadows, blurring, and flickering lights: The light in AI-generated videos often won’t follow natural patterns.
- Inconsistencies at the edges of people’s faces: AI-generated videos often use face swapping, which can make the edges of a person’s face appear distorted.
- Skin that appears too smooth or too wrinkly: Often, the textures in an AI-generated video do not look realistic. Look at people’s foreheads and cheeks to see if they match the rest of the face.
- Too much or not enough blinking: AI struggles to generate realistic facial expressions. Odd blinking patterns are a telltale sign that a video has been generated using AI.
In audio:
- Slurred words: With a recording of someone speaking for just a few minutes, AI tools can generate a realistic audio clip of their voice saying anything. However, the result may slur words that were not used in the original recording.
- Flat, dry tone: AI can struggle to replicate appropriate human emotion in voices, resulting in a flat, monotone delivery.
- Background noise: AI-generated audio will often have extra noise in the background. It can sound like a recording that was made with a low-quality microphone.