
Trump’s use of AI images pushes new boundaries, further eroding public trust, experts say

LOS ANGELES (AP) — The Trump administration has not shied away from sharing artificial intelligence-generated images online, employing cartoon-like visuals and memes and promoting them on official White House channels.

But an edited version of a real photo of civil rights attorney Nekima Levy Armstrong, altered to show her bursting into tears after her arrest, sparked new alarm about how the government is blurring the lines between truth and falsehood.

Homeland Security Secretary Kristi Noem’s account posted the original image of Levy Armstrong’s arrest, and then the official White House account posted a modified image showing her crying. The doctored image is part of a slew of AI-edited images that have been shared across the political spectrum since U.S. Border Patrol agents fatally shot Renee Good and Alex Pretty in Minneapolis.

The White House’s use of AI, however, has troubled misinformation experts, who worry that the spread of AI-generated or edited images could erode the public’s view of the truth and sow distrust.

In response to criticism of Levy Armstrong’s edited image, White House officials doubled down on the post, with deputy communications director Kaelan Dorr writing on X that “the memes are here to stay.” White House deputy press secretary Abigail Jackson also mocked the criticism in a post of her own.

David Rand, a professor of information science at Cornell University, said calling the altered images memes “certainly appears to be an attempt to frame them as jokes or humorous posts, like their previous cartoons. It may be an attempt to protect them from criticism for posting manipulative media.” He said the purpose of sharing the altered arrest images appears to be “much vaguer” than cartoon images the government has shared in the past.

Memes often carry layered messages that are funny or meaningful to those who understand them but opaque to outsiders. Zach Henry, a Republican communications consultant and founder of influencer marketing firm Total Virality, said AI-enhanced or edited images are just the latest tool the White House is using to appeal to Trump supporters who spend a lot of time online.

“Eventually someone going online will see it and immediately recognize it as a meme,” he said. “Your grandparents might see it and not understand the meme, but because it seems so real, they’ll ask their children or grandchildren about it.”

Henry said it is all the better if an image elicits a strong reaction that helps it spread virally. He generally praised the work of the White House social media team.

The creation and dissemination of altered images, especially when they are shared by reliable sources, “reifies the idea of what is happening rather than showing what is actually happening,” said Michael A. Spikes, a Northwestern University professor and news media literacy researcher.

“Government should be a place where you can trust information and you can say it’s accurate because they have a responsibility to do so,” he said. “By sharing this kind of content, by creating this kind of content… it’s eroding trust – although I’m always leery of the word trust – the trust we should have in the federal government to give us accurate, verified information. It’s a real loss and it really worries me.”

Spikes said he has seen an “institutional crisis” around distrust of news organizations and higher education and believes such behavior from official channels exacerbates those problems.

Ramesh Srinivasan, a UCLA professor and host of the Utopia podcast, said many people are now questioning where they can turn for “credible information.” “AI systems will only exacerbate, amplify and accelerate these issues of lack of trust or even understanding of what can be considered reality, truth or evidence,” he said.

Srinivasan said he believes the sharing of AI-generated content by the White House and other officials not only invites ordinary people to continue posting similar content, but also allows people with credibility and power, such as policymakers, to share unlabeled synthetic content. He added that given the tendency of social media platforms to “algorithmically prioritize” extreme and conspiratorial content, which can be easily created by AI-generated tools, “we face a huge set of challenges.”

AI-generated videos related to Immigration and Customs Enforcement operations, protests, and interactions with citizens have proliferated on social media. After Renee Good was shot and killed by ICE officers in her car, several AI-generated videos began circulating that showed women fleeing in their cars as ICE officers ordered them to stop. Numerous other fabricated videos depict immigration raids or show people confronting ICE officers, often yelling at them or throwing food at them.

Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said most of these videos likely come from engagement-farming accounts looking to capitalize on clicks by generating content around popular keywords and search terms like ICE. But he also said the videos have received attention from people opposed to ICE and the Department of Homeland Security, who may view them as “fan fiction” or engage in “wishful thinking” in the hope of seeing real pushback against the agencies and their officials.

Still, Carrasco also believes that most viewers can’t tell if what they’re watching is fake, and questions whether they will know “what is or isn’t real when things really matter, like when the stakes are much higher.”

Even when there are obvious signs of AI generation, such as garbled street signs or other telltale errors, only in the “best case” scenario will viewers be savvy enough, or looking closely enough, to register the use of AI.

Of course, the issue isn’t limited to news about immigration enforcement and protests. Fabricated and distorted images surfaced online following the arrest of deposed Venezuelan leader Nicolás Maduro earlier this month. Experts, including Carrasco, believe the spread of AI-generated political content will only become more common.

Carrasco believes that widespread adoption of provenance-labeling systems, which embed information about a piece of media’s origin in its metadata, could be a step toward a solution. The Coalition for Content Provenance and Authenticity has developed such a standard, but Carrasco doesn’t think it will be widely adopted for at least a year.
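To make the idea concrete: under the C2PA approach, a provenance manifest travels inside the image file itself, stored in ordinary metadata segments (in JPEGs, APP11 marker segments). The simplified sketch below, a hypothetical illustration rather than a real verifier, scans a JPEG’s marker segments for an APP11 segment, which is where such a manifest would typically live; actually validating a manifest’s cryptographic signature is far more involved.

```python
def has_provenance_segment(jpeg_bytes: bytes) -> bool:
    """Return True if the JPEG contains an APP11 (0xFFEB) marker segment,
    the segment type where C2PA-style provenance manifests are embedded.

    Note: finding the segment only means provenance data *may* be present;
    it does not verify the manifest's signature or contents.
    """
    i = 2  # skip the SOI marker (0xFFD8) at the start of every JPEG
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # not a valid marker; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:
            break  # start-of-scan: entropy-coded image data follows
        # segment length is big-endian and includes its own two bytes
        length = int.from_bytes(jpeg_bytes[i + 2 : i + 4], "big")
        if marker == 0xEB:  # APP11
            return True
        i += 2 + length  # advance past marker bytes plus segment body
    return False


# Two tiny synthetic JPEG-like byte strings for illustration:
with_manifest = b"\xff\xd8" + b"\xff\xeb\x00\x04JP" + b"\xff\xda"
without_manifest = b"\xff\xd8" + b"\xff\xe0\x00\x04JF" + b"\xff\xda"
```

The design choice worth noting is that the provenance record stays attached to the file as it is shared, which is exactly why platforms stripping metadata on upload is one of the adoption hurdles Carrasco alludes to.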

“This will always be a problem,” he said. “I don’t think people understand how bad this is.”

__

Associated Press writers Jonathan J. Cooper in Phoenix and Barbara Ortutay contributed to this report.
