Misinformation Monthly – April 2024

Each month, our experts at Cyabra share some of the most interesting articles, items, essays, and stories they’ve read. Come back every month for the latest misinformation, disinformation, and social threat intelligence news.

Disinformation: Coming to a Business Near You

“Publicly traded companies, for example, lose about $39 billion annually due to disinformation-related stock market losses, while an estimated $78 billion is lost globally each year.

False or misleading information can tarnish long-held brand reputation and consumer trust. Organizations with as few as four negative articles can experience losses of up to 70% of prospective customers. With social media largely replacing traditional advertising, organizations must be prepared to combat false narratives that can be easily and rapidly shared.”

nasdaq.com

Spate of Mock News Sites With Russian Ties Pop Up in U.S.

“Into the depleted field of journalism in America, a handful of websites have appeared in recent weeks with names suggesting a focus on news close to home: D.C. Weekly, the New York News Daily, the Chicago Chronicle and a newer sister publication, the Miami Chronicle.

In fact, they are not local news organizations at all. They are Russian creations, researchers and government officials say, meant to mimic actual news organizations to push Kremlin propaganda by interspersing it among an at-times odd mix of stories about crime, politics and culture.”

nytimes.com

Inside the Weird, Shady World of Click Farms

“Within a landscape of misinformation and disinformation, it’s easy to see how click farms could be used dangerously. “It’s also been used for nefarious reasons; [a member of] the BJP Party in India was found to have been buying fake comments on social media,” he alleges. “I don’t think a lot of people share things on a big scale intentionally to fool people, but people just get fooled by it, and I think content is thrown at you so quickly these days that it’s almost impossible to take note of what you’re seeing.”

huckmag.com

We must target the root cause of misinformation. We cannot fact check our way out of this

“The impact of personalised disinformation is likely to be made worse in the future by developments in generative AI, which promises to deliver increasingly granular forms of customisation that until now had been impossible to achieve at scale.

If this all sounds like a disastrous mess, that’s because it is. Collective concerns such as the public interest, human rights and community responsibility can’t compete with the profit motive, and in practice are not prioritised by digital platforms.”

theguardian.com

Inside Google’s Plans to Combat Misinformation Ahead of the E.U. Elections

Concerns about AI-generated disinformation, and the impact it stands to have on contests around the world, continue to dominate this year’s election megacycle. This is particularly true in the E.U., which recently passed a new law compelling tech firms to increase their efforts to clamp down on disinformation amid concerns that an uptick in Russian propaganda could distort the results.

Contrary to what one might expect, prebunking ads aren’t overtly political, nor do they allude to specific candidates or parties. In the video about decontextualization, for example, viewers are shown a hypothetical scenario in which an AI-generated video of a lion set loose on a town square is used to stoke fear and panic. In another video, this time about scapegoating, they are shown an incident in which a community lays sole blame on another group (in this case, tourists) for the litter in their parks without exploring other possible causes.

time.com

How Can We Tackle AI-Fueled Misinformation and Disinformation in Public Health?

“From hate speech to conspiracy theories, AI-fueled misinformation and disinformation serve to polarize society and create a harmful online environment. The World Economic Forum identified the threat from misinformation and disinformation as the most severe short-term threat facing the world today.

Generative AI is also being used to silence, intimidate, and humiliate women and girls, said Fleming. Online misogyny, including death threats, rape threats, and humiliating doctored photos, is increasingly being used as a weapon to shut down critics, such as leaders, politicians, activists, journalists, and other women. Fleming cited a 2022 UNESCO survey of female journalists around the world showing that 73% experienced online violence and 20% were attacked offline in incidents they felt were linked to it. AI-generated images are easy to create, and the monetization of online misogyny provides an incentive.”

bu.edu

Related posts

Pro-Trump Bots Spare No One – Part 2

In the first part of our 2024 election story, Cyabra uncovered a large network of bots manipulating conversations about the elections, promoting pro-Trump agendas,...

Rotem Baruchin

April 30, 2023

Journalists, Watch Out for Impersonators

Did you hear about the Ticketmaster customer care impersonation? What about the 165 fake profiles impersonating Bank Negara? Other major companies like American Express and...

Rotem Baruchin

June 27, 2023

Hair Care Brands Tangled Up in Fake News

The “No Shampoo” trend, also known as the “No-Poo Movement”, has spread across social media like wildfire. Despite repeated warnings from experts regarding this trend, which...

Rotem Baruchin

April 1, 2024