Three Examples of GenAI Used for Disinformation

AI-generated text and images offer unintended value, and unprecedented opportunities, to bad actors online. With a single click, they can create millions of texts, images, and videos (deepfakes and others) to help them promote and advance any agenda. 

This blog post examines three cases in which GenAI was used to spread disinformation and fake news, and to sow confusion, mistrust, and anger.

 

1. The Fake Pentagon Explosion: Stock Market Manipulation

This is probably the most famous use of a GenAI image to date, deployed not only to influence discourse on social media but also to disrupt the stock market. On May 22, 2023, an image depicting smoke supposedly rising near the Pentagon started circulating on X (then Twitter), shared first by a fake profile that claimed to be a “media and news organization” and carried a blue checkmark. 

Thousands of real profiles shared the post before it was finally flagged as fake and taken down. By then it was too late: the S&P 500 dropped by about 30 points. While the index later recovered, the incident showed how easily online disinformation can spill into the real world and affect it. 

Note that this wasn’t a coordinated disinformation campaign amplified by bot networks. It was a single bad actor using a single fake account to achieve virality and cause maximum chaos. The post’s spread was arguably assisted by X’s policy at the time of letting anyone buy a verified checkmark, which both made the fake profile look legitimate and led the algorithm to boost its reach.  

Fake GenAI image showing an explosion in the Pentagon

 

2. Trump Arrested: Mistrust in Leaders

Ironically, the famous images of Trump being arrested were not created by a threat actor at all. Eliot Higgins, founder of the investigative journalism group Bellingcat, created them with the image generator Midjourney to mock Trump (as he had previously done with Putin). Higgins did not expect the images to trend and go viral as much as they did, which is all the more ironic given that Bellingcat uses social media posts and other digital data to establish facts and uncover crimes. 

Higgins posted the images on March 20, 2023, and within a few days they had been shared 7,500 times, liked 39,000 times, and viewed 6.7 million times. While Higgins rushed to apologize, Russian media and other foreign news channels were quick to report the story as fact, highlighting how easy it has become to discredit present and past leaders in the eyes of the world.

While those photos could be identified as fake with a closer look, keep in mind that those less-than-perfect images were created over six months ago, a lifetime in GenAI terms. Today’s generated images already look far better, clearer, and more accurate (for example, check out this fake image of the pope). And even though the images were clearly fake, most people didn’t bother with a second look and simply shared the “news,” creating a PR crisis and a reputational problem for the US government. CNN also noted that the images “fed Trump’s narrative of persecution, a visual manifestation of the drama he puts into his posts.”

Trump arrest: would you have believed it?

 

3. Natural Disasters: Panic-Inducing Disinformation

Remember the fake alligator photo that trended on social media during Hurricane Ian in Florida over a year ago? That image, shared over 84,000 times, wasn’t even created by GenAI; it predates the technology. The fake alligator (in fact, a fake crocodile) has been circulating since 2010. This simple photoshopped image wreaked havoc on social media, adding to the many other posts spreading misinformation and disinformation during Hurricane Ian. 

While fake and photoshopped photos have been around for ages, the arrival of GenAI imagery brought a massive wave of fake “natural disasters,” such as a fabricated 2001 earthquake and tsunami on the west coast of North America that supposedly destroyed houses and roads in Seattle, Portland, and parts of Canada. 

This natural disaster never actually happened, despite many, many fake GenAI images claiming otherwise.

This method of using GenAI images to induce panic and hysteria online has become so common that even educational institutions like the Pacific Tsunami Museum now dedicate a section of their website to viral fake tsunami images, explaining why they are in fact fake.

It’s also important to note that this technique is not limited to “natural disasters”; it is used in wars as well.

 

The North American earthquake that never happened.

 

The Future of GenAI Imagery

AI image generators continue to improve. Last year, a fake photo created with GenAI won a category in the Sony World Photography Awards, proving once again that real and fake can no longer be distinguished, at least not by the human eye. We can expect to see many more such images used for manipulation on social media in the years to come, along with growing confusion and mistrust around real images and videos. 

This is the third part of Cyabra’s GenAI series, which explains why the technology that entered our lives over the past year makes the work of bad actors so much easier. The series delves into the risks of GenAI and explains why it’s crucial for the private and public sectors to use advanced SOCMINT and OSINT tools to monitor social media for threats, detect GenAI content, and identify fake accounts.

If you’d like to learn more, contact Cyabra. As always, we recommend using our free-to-use tool, botbusters.ai, to detect GenAI text and images, and to identify fake profiles.
