High-profile individuals in important roles across companies and organizations are frequently subject to risks emerging from the online world. You don’t have to be Elon Musk or Mark Zuckerberg to be a target: in the past year alone, Cyabra has uncovered online death threats towards the CEOs of Starbucks, Apple, and Disney, a website spreading the private emails and phone numbers of many other CEOs, and fake profiles impersonating a notable journalist. Overall, Cyabra has witnessed a steady increase in the number of attacks and threats against individuals online.
How to Detect Risks Online
To identify and stop risks, Cyabra applies sophisticated AI tools and unique NLP models to a meticulous investigation that includes measuring authenticity, identifying fake profiles, and detecting fake or AI-generated content. Let’s delve into the systematic approach Cyabra takes to identify, analyze, and neutralize potential threats:
Tracking a Threat
Threats to executives and other individuals don’t appear out of nowhere: they’re usually the continuation of criticism, anger, or other negative emotions directed at the individual in question, or even at the company they work for.
The first step in Cyabra’s research is scanning every social media platform for negative mentions of the individual in question. It’s important to note that negative sentiment by itself isn’t necessarily a concern – people are allowed to express criticism. When a critical mass of people hates a company, it’s a brand reputation issue, usually handled by the Marketing and PR teams (although, as with every rule, there are exceptions, such as the recent case of Zara being targeted by fake profiles). However, when a similar number of people hate a person – whether a board member, a CEO, or even a store manager – that hatred can translate into an actual risk to the individual or their loved ones.
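To make the “critical mass” distinction concrete, here is a minimal sketch of volume-based triage. The threshold, field names, and routing labels are illustrative assumptions, not Cyabra’s actual logic:

```python
from collections import Counter

def triage_targets(mentions: list[dict], threshold: int = 100) -> dict:
    """Route targets by negative-mention volume: company-level spikes
    go to PR/Marketing, individual-level spikes go to security review."""
    counts = Counter(
        (m["target"], m["target_type"])
        for m in mentions
        if m["sentiment"] == "negative"
    )
    routing = {}
    for (target, target_type), n in counts.items():
        if n < threshold:
            continue  # below critical mass: ordinary criticism
        routing[target] = "PR/Marketing" if target_type == "company" else "security review"
    return routing
```

A real pipeline would of course weight mentions by reach and severity rather than counting them flat.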
Content Analysis: Detecting Negativity
Cyabra’s algorithm identifies posts, shares, comments, images, and videos containing harmful and negative content. Detection draws on an extensive database of hostile words across hundreds of languages (“hate”, “kill”, “murder”, and many others), as well as words that could be connected to damaging narratives (for example, the word “family” might be associated with threats).
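Cyabra’s production models are proprietary, but the core idea of keyword-based flagging can be sketched in a few lines of Python. The word lists below are tiny illustrative stand-ins for the real multilingual database:

```python
# Hypothetical word lists -- stand-ins for a large multilingual database.
HOSTILE_TERMS = {"hate", "kill", "murder"}
CONTEXT_TERMS = {"family", "home", "school"}  # risky only in hostile context

def flag_post(text: str) -> dict:
    """Report which hostile and context terms appear in a post."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return {
        "hostile": sorted(words & HOSTILE_TERMS),
        "context": sorted(words & CONTEXT_TERMS),
        "flagged": bool(words & HOSTILE_TERMS),
    }
```

A production system would use NLP models rather than exact word matching, but the output shape – matched terms plus a flag for human review – is the same idea.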
The next stage is separating general negativity and criticism from violent threats against executives and their families, and detecting leaked information that might raise concerns – for example, the executive’s home address or license plate number, their partner’s workplace address, or their children’s school.
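This triage step can be sketched as a simple rule-based classifier. The violent-term list and the phone-number pattern below are simplified assumptions; a real system would combine many such detectors with NLP-based intent analysis:

```python
import re

# Hypothetical, heavily simplified detectors.
VIOLENT_TERMS = {"kill", "hurt", "attack", "murder"}
PHONE_RE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")  # US-style numbers only

def triage(text: str) -> str:
    """Sort a negative post into 'threat', 'leaked_info', or 'criticism'."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    if words & VIOLENT_TERMS:
        return "threat"
    if PHONE_RE.search(text):
        return "leaked_info"
    return "criticism"
```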
Identifying Harmful and Fake Profiles
At this stage, Cyabra flags the accounts that posted the threatening content for further investigation, analyzing their activity and behavioral patterns. Using OSINT tools, a comprehensive profile of the threat actor is compiled: the profile is compared across different social media platforms, checked for possible impersonation, and its posted content is studied carefully to detect any mention of private information about the executive in question.
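As a rough illustration of cross-platform comparison, one simple approach is fuzzy matching of shared profile fields. The `SequenceMatcher` heuristic and the 0.8 threshold here are illustrative choices, not Cyabra’s method:

```python
from difflib import SequenceMatcher

def profile_similarity(a: dict, b: dict) -> float:
    """Average string similarity over the fields two profiles share."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    scores = [
        SequenceMatcher(None, str(a[k]).lower(), str(b[k]).lower()).ratio()
        for k in shared
    ]
    return sum(scores) / len(scores)

def possible_impersonation(real: dict, suspect: dict, threshold: float = 0.8) -> bool:
    # Near-identical name/bio under a different handle is a classic
    # impersonation signal.
    return (profile_similarity(real, suspect) >= threshold
            and real.get("handle") != suspect.get("handle"))
```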
Further analysis is then conducted to understand the individual’s attitude and intentions, discern violent patterns, and determine whether they pose a threat to others and whether further action is needed. This can include a deep timeline scan, a study of the person’s interactions and the online groups and communities the profile is part of, and, of course, a wide authenticity scan.
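One pattern a timeline scan can surface is escalation. As a minimal sketch (the window size and the minimum-count floor are assumed values), one might compare a profile’s hostile-post volume across consecutive time windows:

```python
from datetime import date, timedelta

def detect_escalation(post_dates: list[date], today: date,
                      window_days: int = 7) -> bool:
    """Flag a profile whose hostile posting is accelerating: more hostile
    posts in the most recent window than in the one before it."""
    recent_start = today - timedelta(days=window_days)
    prior_start = recent_start - timedelta(days=window_days)
    recent = sum(1 for d in post_dates if recent_start < d <= today)
    prior = sum(1 for d in post_dates if prior_start < d <= recent_start)
    # Require both growth and a minimum floor to ignore one-off outbursts.
    return recent > max(prior, 2)
```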
Keep Your People Safe, Online and Offline
While not every threat in the online sphere manifests as a danger in the real world, being aware of and alerted to threats on social media can make a huge difference when safeguarding your people. The ability to monitor social media, detect threats and indicators of compromise (IOCs), and stop bad actors before they can cause harm is crucial for CISOs and Corporate Security teams protecting employees and executives.
By combining thorough investigation, content analysis, deep authenticity analysis, and collaboration with law enforcement, Cyabra identifies potential dangers and stops them before they cause harm.