By Yossef Daar, co-founder and CPO of Cyabra
For years now, social media platforms have been plagued by fake profiles, foreign interference, information warfare, influence operations, attacks on governments and public institutions, and countless other attempts to manipulate conversations and undermine election integrity. Many of these sophisticated social engineering attacks have even crossed into the private sector, ensnaring major corporations such as Netflix, Coca-Cola, and Intel, and causing severe financial and reputational damage.
The year 2025, which started with President Donald Trump’s second inauguration, has already seen an unprecedented rise in disinformation. Following one of the most turbulent, chaotic, and confusing presidential elections in history, society’s trust in public institutions is more fragile than ever.
This article examines the tactics employed by foreign state actors, particularly those from Russia, China, and Iran. It delves into their strategies for shaping public opinion, offering insight into their methods and impact. By raising awareness of online manipulation, this piece equips readers with the knowledge to safeguard themselves and their communities from digital threats.
Who Are the State Actors Involved in Online Manipulation?
In recent years, creating fake campaigns and bot networks has come dangerously close to becoming a legitimate marketing strategy. A campaign manager who wishes to boost engagement around their brand quickly and doesn’t mind if the engagement is artificial can easily find bot networks for hire.
However, sophisticated fake networks come in many forms, and some are far more destructive than others. While using fake profiles for advertising has become almost common practice, manipulating public discourse on social media is a different story entirely.
The three most prominent state actors involved in online manipulations originate from Russia, China, and Iran.
These actors have been active for years and consistently succeed in swaying public opinion on core issues. The key characteristics they share include the substantial funds they allocate to foreign interference, the scale (volume) and persistence (duration) of their fake campaigns, and, most importantly, the fact that their operations predominantly target other countries – particularly Western democracies.

A network of fake profiles originating in China that was behind a massive fake campaign to delegitimize Taiwan’s right to independence, following a visit by a U.S. state representative.
GenAI: A Weapon of Mass Disinformation
One of the most common traits shared by these three major state actors is their practiced use of AI-generated content in their operations:
- Text: Bad actors integrate sophisticated GenAI engines into their bot networks to create unique, authentic-looking text. This helps with regular posting, building the illusion of an established profile rather than a newly created one, and interacting with authentic profiles that are unaware they’re engaging with a bot. Fake profiles also interact with other bots to amplify their reach, sometimes even orchestrating arguments between two sides (a simple heuristic for spotting this kind of automated posting is sketched after this list).
- Images: Bad actors use AI-generated visuals to craft and spread false narratives, depicting fabricated war zones, flood damage, imprisoned politicians, and more. GenAI images are also used to create credible profile pictures and fill a fake profile’s timeline with visuals of leisure activities, vacations, and even favorite sports teams.
- Deepfake Videos: Bad actors use this sophisticated technique to convincingly replicate real individuals, allowing them to impersonate state leaders (presidents and PMs), celebrities, and other influential figures. A well-timed deepfake could influence elections, manipulate financial markets, damage the reputations of individuals or brands, or even incite violence.
- Supporting Content: Using GenAI, these bad actors invest significant resources in generating websites, news outlets, blogs, and other sources that bolster their claims, creating an illusion of reliable references and credible citations.
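
Automated posting of this kind leaves statistical fingerprints that monitoring teams can look for. Below is a minimal Python sketch of two such heuristics: unnaturally regular posting intervals and near-duplicate text recycled across posts. The function names, data shapes, and thresholds are illustrative assumptions for this article, not Cyabra’s detection logic or any platform’s API.

```python
from datetime import datetime
from difflib import SequenceMatcher
from itertools import combinations
from statistics import pstdev

def cadence_score(timestamps: list[datetime]) -> float:
    """Standard deviation (in seconds) of gaps between consecutive posts.
    Scripted accounts often post at suspiciously regular intervals,
    so a very low value is a weak signal of automation."""
    ts = sorted(timestamps)
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    return pstdev(gaps) if len(gaps) > 1 else float("inf")

def near_duplicate_pairs(texts: list[str], threshold: float = 0.9) -> int:
    """Count pairs of posts that are near-copies of each other.
    Bot networks frequently recycle the same GenAI output, lightly
    edited, across many posts and profiles."""
    return sum(
        1
        for a, b in combinations(texts, 2)
        if SequenceMatcher(None, a, b).ratio() >= threshold
    )

def looks_automated(timestamps: list[datetime], texts: list[str]) -> bool:
    # Illustrative thresholds only: flag a profile whose posting gaps
    # barely vary and whose content keeps repeating itself.
    return cadence_score(timestamps) < 60 and near_duplicate_pairs(texts) > 3
```

No single signal like this is decisive on its own; real detection systems weigh many behavioral, linguistic, and network-level indicators together.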

A GenAI-edited image of Trump smiling after his assassination attempt, used both to praise him and to claim the attempt was staged.

A deepfake video of Ukrainian President Volodymyr Zelenskyy, purportedly showing him telling the Ukrainian nation that he had decided to surrender to Russia.
A Well-Oiled Election Disinformation Machine
Election conversations on social media have always attracted state actors, who see them as prime opportunities to influence public opinion. While fake profiles typically make up 5% to 12% of the profiles in social media conversations, during elections – any election, in any country – that share can rise significantly, sometimes reaching 30%, 40%, or even 50% of election-related discussions.
An effective foreign influence campaign aimed at impacting elections starts years in advance. Patient state actors gradually create fake profiles in a careful trickle, keeping some dormant until it’s time to launch the campaign. Others are kept active to create the illusion of authentic accounts that, over time, gain traction and build trust among authentic profiles, producing a slow but steady ripple effect. The result is a network of bots nearly indistinguishable from genuine communities, often overlooked by social media monitoring teams.
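That dormant-until-activation pattern is one of the few lifecycle signals that survives even careful content generation. The sketch below is a simplified illustration, with hypothetical field names and thresholds rather than a production rule set, that flags accounts which sat largely idle for years and then produced most of their lifetime output in a single recent burst.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Profile:
    handle: str
    created_at: datetime
    post_timestamps: list[datetime]  # full posting history

def dormant_then_burst(profile: Profile,
                       quiet_years: float = 2.0,
                       burst_window_days: int = 30,
                       burst_share: float = 0.8) -> bool:
    """Flag a profile that existed for years with little activity and then
    produced most of its lifetime output inside a short, recent window
    (the dormant-until-activation pattern described above).
    Thresholds are illustrative only."""
    if not profile.post_timestamps:
        return False
    posts = sorted(profile.post_timestamps)
    # Only old accounts qualify: the whole point is the long, quiet build-up.
    if posts[-1] - profile.created_at < timedelta(days=365 * quiet_years):
        return False
    window_start = posts[-1] - timedelta(days=burst_window_days)
    recent = sum(1 for t in posts if t >= window_start)
    return recent / len(posts) >= burst_share
```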
Election influence is a long game. While state actors may favor one candidate over another, their true goal is to sow doubt, confusion, anger, and mistrust in public institutions. Either way, their success is assured: trust in society’s foundations is weakened. This is evident in how fake news and conspiracy theories that circulated years earlier resurface whenever a related topic arises. By the time these narratives reappear, they’re often spread by real people who unknowingly propagate disinformation. Once a false narrative spreads through social media discourse, it gains online immortality. Even years later, debunked conspiracies and fake news continue to appear in discussions, embedded as part of the narrative.
Originating in Russia, a network of fake profiles worked to discredit political figures supporting Ukraine during the Russia-Ukraine war.
What Do State Actors Want?
The conflict between Western democracies and non-democratic states has never truly ceased, even if it’s no longer fought with cannons and bombs. Call it a cultural war, a battle for global influence, or propaganda – in the end, spreading disinformation isn’t the ultimate goal for state actors. It’s a means to an end, part of a larger struggle for dominance, where controlling the narrative is just one piece of the game. Attacking one candidate, supporting another, targeting a Fortune 500 company, or promoting a divisive influencer – as long as the pot is stirred, the conflict continues. Of course, spreading disinformation is a lot cheaper than moving an aircraft carrier, which is why the disinformation war rages on.
Tracking and Monitoring Is Key
Around the world, governments and public organizations are beginning to understand, respect, and fear the power of social media platforms in shaping public opinion – and the ability of state actors to manipulate these forces. Learning to identify, expose, and prevent these bad actors from influencing our perceptions and our lives is crucial to restoring trust in society.
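One practical starting point for such monitoring is looking for coordination: many nominally unrelated accounts pushing out near-identical messages within minutes of one another. The following sketch assumes you already have a feed of (author, timestamp, text) posts from a platform; the normalization step and the thresholds are illustrative only, not a specific vendor’s method.

```python
import re
from collections import defaultdict
from datetime import timedelta

def normalize(text: str) -> str:
    """Strip URLs and punctuation so lightly edited copies of the same
    message collapse to the same key."""
    text = re.sub(r"https?://\S+", " ", text.lower())
    text = re.sub(r"[^\w\s]", " ", text)
    return " ".join(text.split())

def coordinated_messages(posts, min_authors=10, window=timedelta(minutes=15)):
    """posts: iterable of (author, timestamp, text) tuples.
    Return normalized messages that many distinct authors pushed out
    within a short window of one another, a crude signal of coordinated
    amplification. Thresholds are illustrative, not a production rule."""
    buckets = defaultdict(list)
    for author, ts, text in posts:
        buckets[normalize(text)].append((ts, author))
    flagged = []
    for message, items in buckets.items():
        items.sort()  # chronological order
        for ts, _ in items:
            authors = {a for t, a in items if ts <= t <= ts + window}
            if len(authors) >= min_authors:
                flagged.append(message)
                break
    return flagged
```

Grouping by normalized text is deliberately crude; it catches copy-paste amplification but not paraphrased GenAI variants, which is why behavioral signals like the ones above are usually combined with it.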
In this article, I discussed the methods shared by three major state actors that manipulate social discourse. In future articles, I will explore the differences among them and explain the evolving characteristics of disinformation campaigns originating from Russia, China, and Iran. I’ll also cover how to detect these attacks and identify fake profiles online. In the meantime, stay wary of those who may be trying to manipulate you. Ask yourself: can you be sure that the person you’re speaking with is real? And if not, what might they want?

An analysis of a bot network, consisting of 1,914 fake profiles, that gained a potential reach of 19 million views, as well as 20,000 engagements.
***
Yossef Daar
Yossef Daar is the co-founder and CPO of Cyabra. Yossef plays a significant role in shaping Cyabra’s vision, strategy, and evolution. His work involves leading product development and innovation in Cyabra’s AI-driven tools, ensuring the platform addresses disinformation challenges effectively, and aligning the product with the needs of businesses and government agencies. Previously, Yossef served for 13 years as the head of information warfare in the IDF’s special operations department.