Each month, our experts at Cyabra round up the most interesting articles, essays, and stories they’ve read. Come back every month for the latest misinformation, disinformation, and social threat intelligence news.
“Looking for bad grammar and incorrect spelling is a thing of the past — even pre-ChatGPT phishing emails have been getting more sophisticated,” says Conal Gallagher, CIO and CISO at IT management firm Flexera. “We must ask: ‘Is the email expected? Is the from address legit? Is the email enticing you to click on a link?’ Security awareness training still has a place to play here.”
“Analysts at NewsGuard, an online trust-rating platform for news, recently tested out the tool and found it produced false information on command when asked about sensitive political topics.”
“Although external malicious actors receive most of the media attention, insider threats – stemming both from negligence and from malicious intent – are on the rise. According to the Ponemon Institute’s 2022 Cost of Insider Threats Global Report, 67 percent of companies experience 21 to 40 insider-related incidents per year – up from 60 percent in 2020 – with each incident incurring an average cost of $484,931. Insider threats are notoriously difficult to eradicate: It takes victim organisations an average of 85 days to contain an insider-related incident.”
“The parallels between cybersecurity and cognitive security extend beyond these foundational concepts. For example, malware is malicious code that corrupts machine behavior to deliver effects, such as ransomware encrypting files to extort victims. Similarly, narratives can include forms of disinformation that distort human understanding to influence behavior in line with a threat actor’s intentions.”
Today’s phishers don’t just phish via email: they phish via social media, phone, WhatsApp, SMS, and Zoom, and they even leverage tools like ChatGPT to draft convincing phishing messages free from grammatical errors and spelling mistakes. What’s worse, phishers are advancing their social engineering capabilities at a time when organizations are still developing their hybrid work policies.
“The glitch exposed the payment details of about 1.2% of ChatGPT Plus users, including their email addresses, payment addresses, and the last four digits of their credit card numbers.”
“It can be difficult to think of responses to common anti-transgender disinformation in the moment. Responding directly to anti-trans talking points may also provide attention and validation to the disinformation, rather than correction. Evidence shows that “prebunking,” or inoculating people to disinformation before they hear it from a disreputable source, can be highly effective in preventing the spread of dis- and misinformation.”