
True or False?

How to tell fact from fabrication online. Experts advise you to proceed with caution, especially with regard to AI, and check multiple sources

By Marie Sherlock

 

“This is Not Morgan Freeman”: One prominent deepfake video, created by a Dutch YouTube channel called “Diep Nep” in 2021, explicitly labels itself as a deepfake to demonstrate the capabilities of the technology and raise awareness about “synthetic reality.” Yes, despite the image and the voice, the actor was not Morgan Freeman.

Not long ago, a friend sent me a link to a YouTube video that she said gave a blow-by-blow account of a debate between two prominent individuals. I clicked on the link she sent, but as I watched I knew that something was off about it. After Googling around I concluded it was a complete fabrication.

Sadly, there were multiple YouTube videos “reporting” on the same debate which simply never took place.

The friend who sent this to me is well-educated and very bright. Yet she fell for the ruse.

I recently saw multiple ads online in which Oprah (apparently) shilled for a weight loss cure involving Himalayan pink salt. I had no idea that those were AI-generated until I read an article about the impacts of AI that flagged the Oprah ads as examples.

I had been taken in too.

My friend and I are in good company. Melissa Zimdars is a media studies scholar and chairwoman of the Department of Communication & Media at Merrimack College. Even though she’s an expert in the field, Zimdars says she was duped by a seemingly real video of a lawyer being arrested during an ICE raid.

That video also turned out to be an AI-generated sham.

 

Swimming Against a Tsunami of Falsehoods

It all started out innocently enough. In 1991, the very first website debuted. Two years later, there were only about 130 websites.

But by 2005, that number had jumped to 46.5 million registered domains. The count was up to 1.88 billion websites by 2023.

And that’s only part of the equation. On the flip side of all of that “information” are consumers.

Fueled by the advent of social media sites and the introduction of the smartphone — and then further jumpstarted by the pandemic — Americans now spend an average of seven hours daily online. (A 2024 Pew Research Center study revealed that 41% of Americans say that they’re online “almost constantly.”)

So we’re spending at least half our waking hours online sponging up material created by millions of largely unregulated, unvetted sources.

What could possibly go wrong?

 

‘A Growing Problem’

The sheer quantity of that data creates massive challenges for consumers. “Being able to assess whether information is true, false or somewhere in between is a growing problem,” says Zimdars.

Walter Scheirer, a professor in Notre Dame’s Department of Computer Science & Engineering, notes that there are two basic areas of concern: misinformation (getting the facts wrong unintentionally) and disinformation (deliberately misstating facts).

The means by which misinformation and disinformation are spread online are myriad. They include taking information and quotes out of context, deepfakes, memes, astroturfing and straight-up falsehoods.

What do the experts find most concerning? Scheirer thinks memes are particularly troublesome. “The messages they carry, usually through humor, can be insidious, especially if that humor is racist, antisemitic, misogynistic or otherwise antisocial,” he says.

And then there’s artificial intelligence. It has made those “deepfakes” much easier to create and much more realistic. “AI definitely lowers the bar for the generation of fake content,” says Scheirer.

Zimdars used to think that material taken out of context or presented in misleading ways was the biggest pitfall: “gray area” information that has some scintilla of truth. Now, she says, “AI-generated images and videos of the future will make the 2016 fake news crisis [disinformation like Pope Francis endorsing Trump and Hillary Clinton selling weapons to ISIS] look quaint in comparison.”

Zimdars is also alarmed by who is spreading the disinformation. “Now I’m more concerned with entirely fabricated content because it’s being circulated by the most powerful people in the world.”

The problem is nearly existential in nature, according to Zimdars. “How can a society collectively make decisions if we’re no longer in agreement on what is true or false, or able to tell what is fake or real?” she asks.

 

Strategies for Discerning the Truth

“It is more important than ever to know how to evaluate information on several levels rather than just relying on what we think sounds believable,” says Brooke Becker, an associate professor at the University of Alabama at Birmingham Libraries.

Here are some methods for doing just that:

Fact-Checkers

“I love fact-checking sites,” says Becker. If you’re looking to verify online claims, Becker recommends first going to the International Fact-Checking Network (IFCN) site, which includes a list of “signatories” (currently numbering 158 organizations) that have been vetted and agree to abide by the IFCN’s code of principles.

But these vetted sites are limited by time constraints and the enormous quantity of mis- and disinformation online. Even the most efficient fact-checkers cannot help consumers in real time. “That’s the most complicated element of misinformation. We all want to know all the things immediately and are loath to wait even for a modicum of investigation or proofing,” notes Becker.

Do-It-Yourself Fact-Checking

Because fact-checking sites aren’t designed to answer every “true or false?” inquiry immediately, experts have devised several ways that consumers of news can evaluate information themselves.

Lateral Reading

This method involves double- or triple-checking “news” across multiple independent sites that you trust, instead of relying on a single source. Essentially, says Zimdars, “you leave the page or platform where you encountered the information and look it up elsewhere.” If you find that multiple reputable news outlets are reporting the same thing, you can usually trust the message.

The CRAAP, SIFT and ROBOT Tests

An array of acronyms can guide you in your search for accuracy.

One DIY truth-sleuthing method is the CRAAP Test — and, yes, the librarian who created this approach in 2004 hoped that the acronym would be easy to remember because it helps individuals determine if information is “crap” or not. The steps include checking the currency, relevance, authority, accuracy and purpose of the material in question.

SIFT (stop; investigate the source; find better coverage; and trace claims, quotes and media to their original context) is a more recent method that Becker favors.

Another strategy that Becker likes is geared toward AI inquiries. The ROBOT test asks the consumer to look at reliability, objective, bias, ownership and type.

Put Down Your Smartphone

Finally, there’s this nearly sacrilegious advice: Skip the internet altogether.

“One of the best strategies for avoiding being duped by false information on the internet is to not use the Internet to gather information in the first place,” says Scheirer. In particular, Scheirer cautions consumers not to look to social media for their news. “There are plenty of other sites that can be trusted because they have good reputations for honest reporting. For example, the websites of newspapers of record,” he adds.

To help you determine which news sites can be trusted, Scheirer adds this piece of wisdom: “In general, the slower and more professional the news medium, the more likely it is to be reliable. This is because professional journalists have to spend time checking the facts of a story before it runs.”

Patience Is a (Virtual) Virtue

“The slower, the better” is good advice for consumers too. All of the methods for discerning the truth online take time and effort, says Becker. “In any good evaluation scheme, one of the first things you’ll note is the call for patience,” she advises.

“Take a moment,” she cautions.

 

No Magic Bullet

Online fabrications and falsehoods are here to stay. “I’m honestly pretty pessimistic,” says Zimdars. She points to “low quality information and AI slop” on social media platforms, partisan government sites and the failure of news organizations “to meet the contemporary moment.”

“Things are looking pretty grim,” she concludes.

But Zimdars sees glimmers of hope. For one thing, she believes that improvements in AI are a double-edged sword: While they accelerate deepfakes, they could also result in tools that will help consumers detect manipulated content.

Zimdars advocates for government regulations and “platform responsibility.” “AI-generated content needs to be clearly labeled so people know what they’re seeing is not real,” she explains.

Until those things become reality, we can all do better: use vetted fact-checkers and trusted news sources; slow down and verify claims whenever possible.

And perhaps spend a bit less time online.


Marie Sherlock practiced law for a decade before turning to writing and editing in her 30s — and never looked back. She’s worked as the editor of several publications and is the author of a parenting book (“Living Simply with Children,” Three Rivers Press). She spends her empty-nest days writing about travel trends and destinations, simplicity, spirituality and social justice issues. The above article was first published online at nextavenue.org.