Published: Aug. 29, 2024

Banner image: Image of a polling place generated using artificial intelligence. (Credit: Adobe Stock)

On Aug. 18, former president and current presidential candidate Donald Trump posted an unusual endorsement to his social media account on Truth Social. Amid a series of photos, he included an image of pop megastar Taylor Swift wearing an Uncle Sam hat and declaring: "Taylor wants you to vote for Donald Trump."

The problem: It wasn't real. Swift hadn't, and still hasn't, endorsed a candidate for the 2024 presidential election. The image may have been generated by artificial intelligence.

Casey Fiesler, associate professor in the Department of Information Science at CU Boulder, sees the rise of AI in politics as a worrying trend. This month, for example, NPR reporter Huo Jingnan tested out Grok, a new AI platform launched by the social media company X. She was able to create surprisingly realistic security camera images of people stuffing envelopes into ballot drop boxes in the dead of night.

"It's not like fake images weren't a thing before this year," Fiesler said. "The difference is that it's so much easier to do now. AI is democratizing this type of bad acting."

To help voters navigate this new and perilous election information landscape, CU Boulder Today spoke to Fiesler and other experts in AI and media literacy. They include Kai Larsen, professor of information systems in the Leeds School of Business, and Toby Hopp, associate professor in the Department of Advertising, Public Relations and Media Design.

These experts discuss how you can find out whether a photo you're seeing online is the real deal, and how to talk to friends and family members who are spreading misinformation.

Yes, AI really is that good

In the past, AI-generated images often left behind "artifacts," such as hands with six fingers, that eagle-eyed viewers could spot. But those sorts of mistakes are getting easier to fix in still images. Video is not far behind, said Fiesler, who covers the ethics of AI in a course she's teaching this fall called "Ethical and Policy Dimensions of Information and Technology."

"At some point soon, you will be able to see an AI-generated video of the head of the CDC giving a press conference, and it will totally fool you," she said.

At the same time, the algorithms that govern social media platforms like TikTok and Instagram can trap users in downward spirals of misinformation, Larsen said. He's the co-author of a 2021 book on machine learning.

"Algorithms, at least historically, have been driving people into these rabbit holes," Larsen said. "If you are willing to believe one piece of misinformation, then the algorithm is now finding out that you like conspiracy theories. So why not feed you more of them?"
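To see the feedback loop Larsen describes, consider a deliberately crude toy simulation; it is not any real platform's algorithm, and every topic name and number below is invented for illustration. The feed simply serves topics in proportion to past engagement, so a handful of early clicks on conspiracy posts tilts everything shown afterward.

```python
import random

# Toy model of an engagement-ranked feed (illustrative only; real
# recommenders are far more complex). Each post's topic is sampled in
# proportion to the user's past engagement with that topic.

def feed_share(conspiracy_clicks: int, steps: int = 10_000) -> float:
    rng = random.Random(0)
    # One unit of baseline engagement per topic, plus the user's clicks.
    engagement = {"news": 1.0, "sports": 1.0, "music": 1.0,
                  "conspiracy": 1.0 + conspiracy_clicks}
    topics, weights = zip(*engagement.items())
    shown = rng.choices(topics, weights=weights, k=steps)
    return shown.count("conspiracy") / steps

print(f"0 clicks: {feed_share(0):.0%} of the feed is conspiracy content")
print(f"5 clicks: {feed_share(5):.0%} of the feed is conspiracy content")
```

In this toy model, five early clicks are enough to push conspiracy content from roughly a quarter of the feed to about two-thirds of it, which is the "why not feed you more of them?" dynamic in miniature.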

Tech probably won't save us

A range of companies now offer services that, they claim, can detect AI-generated content, including fake images. But just like human eyes, those tools can be easily tricked, Larsen said. Some critics of AI have also urged tech companies to add digital "watermarks" to AI content. These watermarks would flag photos or text that had originally come from an AI platform.

"The problem with watermarks is that they are often fairly easy to get rid of," Larsen said. "Or you can just find another large language model that doesn't use them."
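To make Larsen's point concrete, here is a toy, word-level sketch in the spirit of published "green list" statistical watermarking proposals for AI text. It is not any vendor's real scheme, and all names and numbers are invented. A detector counts how many words fall in a pseudorandom "green" set; shuffling or paraphrasing the text re-rolls those hashes and washes the signal out.

```python
import hashlib
import math
import random

GAMMA = 0.5  # expected share of "green" words in unwatermarked text

def is_green(prev: str, word: str) -> bool:
    # The green/red split for each word is seeded by the word before it,
    # mirroring how token-level schemes seed on the preceding token.
    digest = hashlib.sha256(f"{prev}|{word}".encode()).digest()
    return digest[0] < 256 * GAMMA

def z_score(text: str) -> float:
    """High z-scores suggest the text carries the watermark."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    hits = sum(is_green(a, b) for a, b in pairs)
    n = len(pairs)
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

def watermarked_gibberish(length: int = 200) -> str:
    """Generate filler text that prefers a green word whenever possible."""
    vocab = [f"word{i}" for i in range(50)]
    rng = random.Random(0)
    out = [rng.choice(vocab)]
    while len(out) < length:
        candidates = rng.sample(vocab, 5)
        green = [c for c in candidates if is_green(out[-1], c)]
        out.append(green[0] if green else candidates[0])
    return " ".join(out)

text = watermarked_gibberish()
print(f"watermarked text: z = {z_score(text):.1f}")  # far above chance

# "Paraphrase" by shuffling the words: the pairwise hashes re-roll and
# the statistical signal disappears, though every word is unchanged.
words = text.split()
random.Random(1).shuffle(words)
print(f"after shuffling:  z = {z_score(' '.join(words)):.1f}")  # near 0
```

The gap between the two z-scores is the watermark, and the ease of erasing it with a simple rewrite is exactly the weakness Larsen flags.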

Google it

When it comes to AI images, a little searching online can go a long way, Fiesler said.

Image: In his post to Truth Social, Trump included several AI-generated images of fake Trump supporters, alongside a real photo (upper right) of a woman wearing a "Swifties for Trump" T-shirt.

Image: In a post to X, Trump shared an AI-generated image of a woman who resembles Kamala Harris speaking to a gathering of Communists.

Earlier in August, Trump's campaign accused Kamala Harris' team of using AI to make the crowd look bigger in a photo of one of her rallies. Fiesler ran the photo through a quick Google image search. She discovered that numerous news organizations had covered the same event, and that dozens of other photos and videos existed, all showing the same large crowd.

"Google it," Fiesler said. "Find out: Are news organizations writing about this same event? Do other photographs exist?"
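One way to follow that advice programmatically: once a search turns up photos of the same event from news outlets, a perceptual hash can tell you whether two files show the same underlying picture, even after resizing or recompression. This is a minimal sketch using the open-source Pillow and ImageHash Python libraries; the file names are placeholders, and a low hash distance suggests, rather than proves, a match.

```python
from PIL import Image  # pip install pillow
import imagehash       # pip install ImageHash

# Perceptual hashes stay close under resizing, recompression and light
# cropping, so they can flag a "new" photo that is really a re-upload.
suspect = imagehash.phash(Image.open("suspect_rally_photo.jpg"))
known = imagehash.phash(Image.open("wire_service_photo.jpg"))

# Subtracting two hashes gives their Hamming distance.
distance = suspect - known
print(f"hash distance: {distance}")
if distance <= 5:
    print("Likely the same underlying photo (possibly recropped or edited).")
else:
    print("These appear to be genuinely different images.")
```

A match against a wire-service photo doesn't prove authenticity on its own, but combined with Fiesler's questions (who else covered the event, and what do their photos show?) it makes quick work of recycled or doctored images.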

Hopp, a scholar who studies fake news, or what he prefers to call "countermedia," cautions social media users to beware of posts that try to trigger our worst impulses. In 2016, troll farms in Russia posted thousands of misleading ads about the presidential election to social media. Many tried to tap into negative emotions, pitting Americans on the right and left against each other.

"We can evaluate a piece of information and ask ourselves: 'Is this trying to make me angry? Is this trying to make me upset?'" Hopp said. "If so, we may want to ask: 'Is there a possibility that this might be misleading?'"

What about friends and family?

It's a familiar problem for many people: a friend or family member who won't stop sharing misleading social media posts. Dealing with that kind of loved one can be a minefield, Hopp said. Research shows that simply challenging people on their false beliefs (say, that the Earth is flat) often won't change their minds. It may even make them double down.

He and other researchers have experimented with giving social media users media literacy interventions, or basic info on how to tell fact from fiction. Such interventions can help, but not as much as Hopp would hope.

"I do think that sober, empathetic and caring discussions with those who are important to us about media literacy can be important for helping people use different strategies when they're on social media platforms," he said. "But there's no silver bullet."

What can we do in the long term?

Can anything help to slow the spread of misleading AI images online?

Fiesler sees an urgent need for the federal government to step in to regulate the rapidly growing AI industry. She said that a starting point could be the Blueprint for an AI Bill of Rights, which the White House's Office of Science and Technology Policy drafted in 2022.

This blueprint, which has not been passed into law, includes recommendations like: "You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you."

Hopp, for his part, believes that a lot of the responsibility for stopping political misinformation comes down to another group: politicians. It's time, he said, to cool down the temperature of political discussions in the United States.

"There's a role for our political leaders to discourage the use of hyper-partisan, divisive information," Hopp said. "Embracing this type of misleading information creates conditions that are fairly ripe for its spread."