
Misinformation Monthly – November 2023

Each month, our experts at Cyabra round up some of the most interesting articles, essays, and stories they’ve read. Come back every month for the latest misinformation, disinformation, and social threat intelligence news.

 

KPMG Lodges Complaint After AI-Generated Material Was Used to Implicate Them in Nonexistent Scandals

“A group of academics has apologised to the big four consultancy firms after their submission to an inquiry contained false allegations of serious wrongdoing, which were collated by an AI tool and not fact-checked. One academic has claimed responsibility for the errors, generated by the Google Bard AI tool, which produced case studies that never occurred and cited them as examples of why structural reform was needed.”

www.theguardian.com

 

Fake News Didn’t Play a Big Role in NZ’s 2023 Election – But There Was a Rise in ‘Small Lies’

“More than a third of all misleading posts in 2023 were emotional (37%), targeting voters’ emotions through words or pictures. Some 26% of the social media posts jumped to conclusions, while 23% oversimplified the topics being discussed. And 21% of the posts cherry-picked information, meaning the information presented was incomplete.”

theconversation.com

 

Misinformation in the Age of Artificial Intelligence and What it Means for the Markets

“In mid-October, bitcoin’s price briefly spiked by 5%, after a false Cointelegraph post on X that stated, ‘SEC approves iShares bitcoin spot ETF.’ The tweet remained live for 30 minutes, before it was updated to add the word ‘reportedly.’ It was later removed, and Cointelegraph apologized for what it said was ‘a tweet that led to the dissemination of inaccurate information.’”

nasdaq.com

 

China Is Using the World’s Largest Known Online Disinformation Operation to Harass Americans, a CNN Review Finds

“Meta announced in August it had taken down a cluster of nearly 8,000 accounts attributed to this group in the second quarter of 2023 alone. Google, which owns YouTube, told CNN it had shut down more than 100,000 associated accounts in recent years, while X, formerly known as Twitter, has blocked hundreds of thousands of China “state-backed” or “state-linked” accounts, according to company blogs. Still, given the relatively low cost of such operations, experts who monitor disinformation warn the Chinese government will continue to use these tactics to try to bend online discussions closer to the CCP’s preferred narrative, which frequently entails trying to undermine the US and democratic values.”

cnn.com

 

Deepfakes Could Supercharge Health Care’s Misinformation Problem

“False images and audio that appear to come from a trusted source will make it harder to spread accurate health messages and will erode the public’s confidence in legitimate sources. Imagine the impact of a deepfake Anthony Fauci video telling people not to get vaccinated, for instance. AI could enable disinformation to be automated and disseminated at scale. “That’s the super-threat here,” said Heather Lane, senior architect of the data science team for Athenahealth.”

www.axios.com

 

Gen Z More Likely to Fall Prey to Online Scams Than Their Boomer Grandparents

“Often heard boasting about being raised on the internet, the younger generation is increasingly unsafe there, with the FBI reporting a 2,000 percent increase in losses due to scams affecting those under 20 years of age — jumping from an estimated $8.2 million in 2017 to $210 million in 2022. Born anywhere between the end of the 1990s and the early 2010s, Gen Z digital natives are said to be easy prey for bad actors, who take advantage of their love of social media and online shopping, MLive first reported.”

nypost.com

 

Is Argentina the First A.I. Election?

“Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch. A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.”

www.nytimes.com

 

Iran Conducting Influence Operations in the Biden Administration: Report

“Iran has been conducting covert influence operations for years in the U.S. and abroad as part of a concerted disinformation campaign that is suspected to be espionage, a congressional briefing and corresponding report revealed Tuesday. The 82-page report, titled “Iran: The Ayatollah’s Hidden Hand,” details how supreme leader Ali Khamenei and the Iranian regime use operatives in the Biden administration to influence U.S. policy involving the Islamic Republic, building on a Semafor article that was published in September. At the time, it was reported that at least three Iranian agents transitioned from soliciting Tehran’s talking points to working directly on policy under the purview of U.S. special representative for Iran.”

nationalreview.com