Written by Dan Brahmy, Cyabra’s CEO
When we founded Cyabra in 2018, governments were calling for a solution to identify fake profiles on social media, as well as tools to fight disinformation campaigns around elections, protests, and civil unrest.
Since then, the problem has grown and infiltrated the private sector. While the need for countries, states, and companies to protect themselves against online threats has never been greater, bad actors have also expanded their range and scope in recent years.
Rather than just targeting governments and the general public, purveyors of influence operations have turned their attention to the private sector across all industries. From large publicly traded enterprises to privately held companies, social media has become the new attack surface, and everyone’s fair game.
Whether it’s impersonation, social phishing, or fake campaigns, malicious actors are turning disinformation tactics once aimed at governments on some of the largest and best-known companies in the world.
Want to know how these social engineering campaigns are created and executed? That’s what we’re here for.
In the world of threat intelligence and information warfare, there are three key components behind the structure of every disinformation campaign or social engineering tactic. This methodology is known as the ABC: Actors, Behavior, and Content.
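Before diving into each component, here’s a minimal sketch of how a single campaign could be described along those three axes. The class and field names below are illustrative assumptions made for this post; they are not Cyabra’s data model or an industry-standard schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class ActorType(Enum):
    BOT = "bot"                  # automated account
    SOCK_PUPPET = "sock_puppet"  # manually operated fake persona
    TROLL = "troll"
    AVATAR = "avatar"


@dataclass
class Actor:
    handle: str
    actor_type: ActorType


@dataclass
class Campaign:
    # A: the actors behind the attack
    actors: list[Actor]
    # B: the behavior, i.e. the social engineering tactic being used
    behaviors: list[str]
    # C: the content those actors create and amplify
    content: list[str] = field(default_factory=list)


# Example: a small impersonation campaign described along the three axes.
campaign = Campaign(
    actors=[Actor("acme_support_help", ActorType.SOCK_PUPPET)],
    behaviors=["impersonation", "social phishing"],
    content=["Your account was locked, verify your details here: ..."],
)
```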
A: Who’s Behind Online Attacks?
Actors, sometimes referred to as malicious accounts, bad players, or fake profiles, are the entities behind the attacks.
Malicious actors can be bots, trolls, sock puppets, or avatars (if you’re not familiar with those terms, check out our blog post, Intro to Disinformation). Each represents a different kind of entity operated in a different way, and the key distinction between them lies in their behavior.
B: What Are Those Actors Doing?
Behavior is the type of social engineering being used. The two main types of fake accounts created by malicious actors, sock puppets and bots, are quite different: the former are operated manually, while the latter are automated. Sock puppets cost time; bots cost money.
Because they’re operated manually, sock puppets engage in conversations with ease, choosing their targets carefully instead of shooting in all directions. They’re as sharp as the real person behind them, and can appear just as real as any of us.
Automated accounts are a different matter. They’ve been part of our lives for years, starting out as simple spam bots flooding us with “Special offer! Just today!” messages, and they’re generally easier to spot.
Money can buy the services of bot farms, and if you haven’t got the budget, you can always invest time and manual labor in creating and operating fake accounts the old-fashioned way – one by one.
C: How Much of What We See Is Fake?
Content refers to what fake profiles create and spread. A couple of years ago, bots were limited to just a few platforms, mostly text-based ones like Facebook and Twitter.
Nowadays, bots can be found on Instagram, TikTok, and many other media-based platforms, where they spread their narratives, share and leverage the content of real and fake users as they “see fit”, and skew public opinion. While we mostly think of fake profiles as spreaders of negative content, Cyabra has uncovered bots being used to blunt a rising protest with positive content, and even to act as a distraction by “flooding” a hashtag or topic with irrelevant content to divert the conversation.
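To give a feel for what “flooding” can look like in raw data, here’s a deliberately simple heuristic sketch. The function name, input format, and thresholds are assumptions made up for illustration; this is not how Cyabra detects coordinated activity.

```python
from collections import Counter


def looks_like_flooding(posts, top_n=10, share_threshold=0.5):
    """Toy heuristic: flag a hashtag as possibly flooded when a handful of
    accounts produce most of the posts under it.

    posts: list of (author_handle, text) tuples collected for one hashtag.
    The default thresholds are illustrative guesses, not validated parameters.
    """
    if not posts:
        return False
    per_author = Counter(author for author, _ in posts)
    top_posts = sum(count for _, count in per_author.most_common(top_n))
    return top_posts / len(posts) >= share_threshold


# Example: if 10 accounts wrote 900 out of 1,000 posts under one hashtag,
# that concentration alone would be a reason to look closer.
```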
This is just one way in which these bots are becoming smarter. Their AI, as ChatGPT users are now learning, keeps getting better at sounding and appearing human. In the past, manual social engineering seemed like the real threat: what could compete with real, malicious people hiding behind fake identities? Today, bots tweet, post, share, reply, use hashtags and emojis, and together create a wave of disinformation, steering the discourse in any direction their operators see fit.
Another important aspect of fake content is its ability to adapt quickly and take advantage of new technologies. Disinformation spreads and expands on several fronts at the same time, and while we tend to view visual and audio content as more trustworthy than written content, those formats are actually becoming increasingly easy to falsify. As ChatGPT and other advanced AI tools help create false personas and content, deepfakes are becoming more realistic than ever, misleading even the most suspicious viewers.
Is Your Company Safe From Online Threats?
Technology is always advancing. Creating bots, an act that once required serious hacking skills and a dark hoodie, has become so easy that it won’t be long before we can teach our grandparents how to do it. Fake accounts can now be created in seconds and at massive scale, and there’s effectively no limit to the number of bots a malicious actor can operate.
Don’t ignore this problem. It’s not going away anytime soon – in fact, it just keeps growing. If you’re a CISO, a CEO, or responsible for risk and threat intelligence in your organization, make sure your company is aware of the risks and can identify bad actors and the content they’re spreading. If you need help, contact Cyabra. We provide accurate, cross-platform, multi-language, real-time social threat intelligence, with the ability to identify, uncover, and respond to any online threat. Set up a demo with our experts now.