Sunday, March 1, 2026

What we’ve been getting wrong about AI’s truth crisis


On Thursday, I reported the first confirmation that the US Department of Homeland Security, which houses immigration agencies, is using AI video generators from Google and Adobe to make content that it shares with the public. The news comes as immigration agencies have flooded social media with content to support President Trump’s mass deportation agenda, some of which appears to be made with AI (like a video about “Christmas after mass deportations”).

But I received two kinds of reactions from readers that may explain just as much about the epistemic crisis we’re in.

One was from people who weren’t surprised, because on January 22 the White House had posted a digitally altered photo of a woman arrested at an ICE protest, one that made her appear hysterical and in tears. Kaelan Dorr, the White House’s deputy communications director, didn’t respond to questions about whether the White House altered the photo but wrote, “The memes will continue.”

The second was from readers who saw no point in reporting that DHS was using AI to edit content shared with the public, because news outlets were apparently doing the same thing. They pointed to the fact that the news network MS Now (formerly MSNBC) shared an image of Alex Pretti that was AI-edited and appeared to make him look more handsome, a fact that led to many viral clips this week, including one from Joe Rogan’s podcast. Fight fire with fire, in other words? A spokesperson for MS Now told Snopes that the news outlet aired the image without knowing it was edited.

There is no reason to collapse these two cases of altered content into the same category, or to read them as proof that truth no longer matters. One involved the US government sharing a clearly altered photo with the public and declining to answer whether it was intentionally manipulated; the other involved a news outlet airing a photo it should have known was altered but taking some steps to disclose the error.

What these reactions reveal instead is a flaw in how we were collectively preparing for this moment. Warnings about the AI truth crisis revolved around a core thesis: that not being able to tell what’s real will destroy us, so we need tools to independently verify the truth. My two grim takeaways are that those tools are failing, and that while vetting the truth remains essential, it is no longer capable on its own of producing the societal trust we were promised.
