
Misinformation Monthly – April 2023

Each month, our experts at Cyabra round up some of the most interesting articles, essays, and stories they’ve read. Come back every month for the latest misinformation, disinformation, and social threat intelligence news.

 

7 guidelines for identifying and mitigating AI-enabled phishing campaigns

“Looking for bad grammar and incorrect spelling is a thing of the past — even pre-ChatGPT phishing emails have been getting more sophisticated,” says Conal Gallagher, CIO and CISO at IT management firm Flexera. “We must ask: ‘Is the email expected? Is the from address legit? Is the email enticing you to click on a link?’ Security awareness training still has a place to play here.”

www.csoonline.com
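One of those checks, whether the from address is legitimate, can be partially automated on the receiving end. The sketch below is our own illustration (not from the article, and the helper name is hypothetical): it uses only Python’s standard email library to look for a mismatched Reply-To domain and for missing SPF/DKIM passes in the Authentication-Results header. Real phishing defense obviously needs far more than this.

```python
# Illustrative sketch only: a few quick header checks for a suspicious "From" address.
# Assumes the receiving mail server has already added an Authentication-Results header.
from email import message_from_string
from email.utils import parseaddr


def quick_from_checks(raw_message: str) -> list[str]:
    """Return simple red flags found in the message headers."""
    msg = message_from_string(raw_message)
    flags = []

    # Extract the domain of the visible From address.
    _, from_addr = parseaddr(msg.get("From", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower() if "@" in from_addr else ""

    # A Reply-To pointing at a different domain is a common phishing pattern.
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    if "@" in reply_addr and not reply_addr.lower().endswith("@" + from_domain):
        flags.append("Reply-To domain differs from From domain")

    # Check the SPF/DKIM verdicts recorded by the receiving server.
    auth_results = (msg.get("Authentication-Results") or "").lower()
    if "spf=pass" not in auth_results:
        flags.append("SPF did not pass (or header is missing)")
    if "dkim=pass" not in auth_results:
        flags.append("DKIM did not pass (or header is missing)")

    return flags
```

Header checks like these only catch the obvious cases; the questions Gallagher raises (is the email expected? is it enticing you to click a link?) still come down to human judgment and training.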

 

Could ChatGPT supercharge false narratives?

“Analysts at NewsGuard, an online trust-rating platform for news, recently tested out the tool and found it produced false information on command when asked about sensitive political topics.”

www.poynter.org 

 

Insider threats are on the rise – again

“Although external malicious actors receive most of the media attention, insider threats – stemming both from negligence and from malicious intent – are on the rise. According to the Ponemon Institute’s 2022 Cost of Insider Threats Global Report, 67 percent of companies experience 21 to 40 insider-related incidents per year – up from 60 percent in 2020 – with each incident incurring an average cost of $484,931. Insider threats are notoriously difficult to eradicate: It takes victim organisations an average of 85 days to contain an insider-related incident.”

www.professionalsecurity.co.uk
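To put those numbers in perspective, here is a rough back-of-envelope calculation of our own (not from the report) of what 21 to 40 incidents per year implies at the quoted average cost per incident:

```python
# Back-of-envelope estimate based on the Ponemon figures quoted above.
avg_cost_per_incident = 484_931         # USD, reported average cost per incident
incidents_low, incidents_high = 21, 40  # reported annual incident range

low_estimate = incidents_low * avg_cost_per_incident
high_estimate = incidents_high * avg_cost_per_incident

print(f"Implied annual exposure: ${low_estimate:,} to ${high_estimate:,}")
# Implied annual exposure: $10,183,551 to $19,397,240
```

In other words, for the companies in that 67 percent, the reported figures imply roughly $10 million to $19 million of insider-related losses per year.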

 

Applying infosecurity principles and practices to cognitive security

“The parallels between cybersecurity and cognitive security extend beyond these foundational concepts. For example, malware is malicious code that corrupts machine behavior to deliver effects, such as ransomware encrypting files to extort victims. Similarly, narratives can include forms of disinformation that distort human understanding to influence behavior in line with a threat actor’s intentions.”

https://www.infosecurity-magazine.com 

 

Building a Defense-in-Depth Culture to Combat Phishing

Today’s phishers don’t just phish via email; they phish via social media, phone, WhatsApp, SMS, and Zoom, and they even leverage tools like ChatGPT to draft convincing phishing messages free of grammatical errors and spelling mistakes. What’s worse, phishers are advancing their social engineering capabilities at a time when organizations are still developing their hybrid work policies.

www.corporatecomplianceinsights.com

 

OpenAI: Sorry, ChatGPT Bug Leaked Payment Info to Other Users

“The glitch exposed the payment details of about 1.2% of ChatGPT Plus users, including their email addresses, payment addresses, and the last four digits of their credit card numbers.”

www.pcmag.com 

 

Combating Anti-Transgender Disinformation

“It can be difficult to think of responses to common anti-transgender disinformation in the moment. Responding directly to anti-trans talking points may also provide attention and validation to the disinformation, rather than correction. Evidence shows that “prebunking,” or inoculating people to disinformation before they hear it from a disreputable source, can be highly effective in preventing the spread of dis- and misinformation.”

https://politicalresearch.org/