Blog - Cyabra
https://cyabra.com/category/blog/

Fake Profiles Fueled “Economic Blackout Day”
https://cyabra.com/blog/fake-profiles-fueled-economic-blackout-day/
Thu, 06 Mar 2025

February 28’s Economic Blackout Day started as an organic movement, but was amplified by fake profiles and quickly transformed into an attack against major brands such as Walmart, Target, Best Buy, and Amazon.

The “Economic Blackout” initiative began trending on social media in early February. The activist group behind the movement declared Friday, February 28 a “No Spending” day, urging consumers to avoid shopping – especially from major corporations like Amazon, Walmart, Target, and Best Buy.

While the protest gained widespread promotion online, Cyabra’s latest research reveals that fake profiles played a significant role in amplifying and manipulating the trend.

Here’s the full story:

Why Do Fake Profiles Love Boycotts So Much?

Cyabra analyzed online conversations around Economic Blackout Day from February 14 to March 2 and uncovered 391 fake profiles on X that promoted the boycott using the hashtags #EconomicBlackout, #EconomicBoycott, and #EconomicBlackout2025.

While the number of fake profiles in the discourse might not seem high, the content they spread gained massive exposure, reaching 5,328,000 views. Posts and comments by fake profiles were meticulously crafted, used the hashtags cleverly, and shared posters, images, and media coverage related to the protest, amplifying the backlash.

Image: fake profiles promoting Economic Blackout Day, using hashtags, posters and articles. 

The fake profiles were also remarkably successful at reaching authentic profiles and spreading their messages through authentic communities, as can be seen in the image below.

Image: Network of interactions among profiles using the most popular hashtag, #EconomicBlackout, highlighting how fake profiles seamlessly integrated into authentic conversations. Green represents authentic profiles, while red indicates fake ones.

Why do fake profiles care about protests? The answer is simple – those who operate them have something to gain by joining these online conversations, influencing real users, and manipulating public sentiment.

Since GenAI surged in popularity in 2023, the use of social media bots for coordinated attacks and fake campaigns has skyrocketed. While disinformation efforts once focused mainly on elections and global events, the low cost of bot networks and the ease of creating and deploying them have made them a fixture in many social media trends – especially negative ones like boycotts and backlashes.

In the Economic Blackout conversations, some fake profiles promoted the protest, while others mocked it and encouraged people to spend more. Though seemingly on opposing sides, both efforts served the same purpose – creating polarization, amplifying negativity, and fueling anger and confusion. This is the most common strategy used by fake profiles and has become a core tactic in their campaigns.

Image: a fake profile using the economic blackout hashtag to mock the trend, amplifying the negative discourse. 

From Protest to Boycott: Brands Under Attack

As more accounts joined the discourse, the protest started shifting toward attacking industry giants. Many of these attacks strayed from the Economic Blackout theme, instead citing issues like canceled DEI policies, corporate greed, Black rights, and animal welfare.

Fake profiles seized this shift, latching onto new hashtags and narratives to spread consumer hate. Many directly targeted brands, using hashtags like #BoycottWalmart, #BoycottTarget, #BoycottAmazon, and #BoycottBestBuy.

Image: Fake profiles have shifted from Economic Blackout hashtags to other trending topics, continuing their attacks on the targeted companies.

This shift in the movement created an even greater brand reputation challenge for the targeted companies. Many brands try to manage these issues with traditional crisis tactics – unaware they are engaging with fake profiles, ironically amplifying the negative trend.

As seen in the examples throughout this story, fake profiles are now harder to detect – making it more difficult than ever for brands to protect their online reputation.

The Growing Risk of Brand Disinformation 

This crisis is not an isolated event – it is part of the growing phenomenon of “Brand Disinformation”, where fake profiles orchestrate coordinated attacks against companies. Many brands remain unaware that these crises are fueled by inauthentic accounts, leaving them unprepared to combat the wave of disinformation.

While “Economic Blackout Day” has ended, Cyabra identified multiple authentic accounts promoting ten upcoming “Boycott Days” as part of the same initiative, targeting major brands like Amazon, Nestlé, Walmart, Target, McDonald’s, and General Mills. Fake profiles will undoubtedly join these future campaigns, and their influence is expected to grow with the advancement of AI and automation tools.

Companies must be ready to face the next wave of brand disinformation – leveraging advanced monitoring tools to detect, analyze, and mitigate attacks in real time.

To learn more about protecting brands against online manipulation, reputational risks, and financial damage, contact Cyabra.

An authentic profile’s post about upcoming boycotts against brands in the coming months.

Download the full analysis by Cyabra

How Foreign State Actors Threaten Democracies
https://cyabra.com/blog/how-foreign-state-actors-threaten-democracies/
Mon, 03 Mar 2025

Yossef Daar, Cyabra’s co-founder and CPO, reveals how state actors like Russia, China, and Iran use AI-driven disinformation to manipulate public opinion and target democracies, offering insights to combat these digital threats.


By Yossef Daar, co-founder and CPO of Cyabra

For years now, social media platforms have been plagued by fake profiles, foreign interference, information warfare, influence operations, attacks on governments and public institutions, and countless other attempts to manipulate conversations and undermine election integrity. Many of these sophisticated social engineering attacks have even crossed into the private sector, ensnaring major corporations such as Netflix, Coca-Cola, Intel, and others, causing severe financial and reputational damage.

The year 2025, which started with President Donald Trump’s second inauguration, has already seen an unprecedented rise in disinformation. Following one of the most turbulent, chaotic, and confusing presidential elections in history, society’s trust in public institutions is more fragile than ever. 

This article examines the tactics employed by foreign state actors, particularly from Russia, China, and Iran. It delves into their strategies for shaping public opinion, offering valuable insights into their influence, methods, and impact. By raising awareness of online manipulation, this piece equips readers with the knowledge to safeguard themselves and their communities from digital threats.

Who Are the State Actors Involved in Online Manipulation? 

In recent years, creating fake campaigns and bot networks has come dangerously close to becoming a legitimate marketing strategy. A campaign manager who wishes to boost engagement around their brand quickly and doesn’t mind if the engagement is artificial can easily find bot networks for hire.

However, sophisticated fake networks come in many forms, and some are far more destructive than others. While using fake profiles for advertising has become almost common practice, manipulating public discourse on social media is a different story entirely.

The three most prominent state actors involved in online manipulations originate from Russia, China, and Iran. 

These actors have been active for years and consistently succeed in swaying public opinion on core issues. The key characteristics they share include the substantial funds they allocate to foreign interference, the scale (volume) and persistence (length) of their fake campaigns, and, most importantly, the fact that their operations predominantly target other countries – particularly Western democracies. 

A network of fake profiles originating in China that was behind a massive fake campaign to delegitimize Taiwan’s right to independence, following a visit by a U.S. state representative.

GenAI: A Weapon of Mass Disinformation

Some of the most common aspects attributed to these three major state actors involve their practiced use of AI-generated content in their efforts: 

  • Text: Bad actors integrate sophisticated GenAI engines into their bot networks to create unique, authentic-looking text. This helps with regular posting, building the illusion of an established profile rather than a newly created one, and interacting with authentic profiles that are unaware they’re engaging with a bot. Fake profiles also interact with other bots to amplify their reach, sometimes even orchestrating arguments between two sides.
  • Images: Bad actors use AI-generated visuals to craft and spread false narratives, depicting fabricated war zones, flood damage, imprisoned politicians, and more. GenAI images are also used to create credible profile pictures and fill a fake profile’s timeline with visuals of leisure activities, vacations, and even favorite sports teams.
  • Deepfake Videos: Bad actors use this sophisticated technique to convincingly replicate real individuals, allowing them to impersonate state leaders (presidents and PMs), celebrities, and other influential figures. A well-timed deepfake could influence elections, manipulate financial markets, damage the reputations of individuals or brands, or even incite violence.
  • Supporting Content: Using GenAI, these bad actors invest significant resources in generated websites, news outlets, blogs, and other sources that bolster their claims, creating an illusion of reliable references and credible citations.

A GenAI-edited image of Trump smiling after his assassination attempt, used both to praise him and to claim the attempt was staged.

A deepfake video of Ukrainian President Volodymyr Zelenskyy, in which he supposedly tells the Ukrainian nation that he has decided to surrender to Russia.

A Well-Oiled Election Disinformation Machine

Election conversations on social media have always attracted state actors, who see them as prime opportunities to influence public opinion. While fake profiles typically make up 5% to 12% of conversations on social media, during elections – any elections, in any country – this number can rise significantly, sometimes reaching 30%, 40%, or even 50% fake profiles in election-related discussions.

An effective foreign influence campaign aimed at impacting elections starts years in advance. Patient state actors gradually create fake profiles in a careful trickle, keeping some dormant until it’s time to launch the campaign. Others are kept active to create the illusion of authentic accounts that gain traction over time and build trust among authentic profiles, creating a slow but steady ripple effect. The result is a network of bots nearly indistinguishable from genuine communities, often overlooked by social media monitoring teams.

Election influence is a long game. While state actors may favor one candidate over another, their true goal is to sow doubt, confusion, anger, and mistrust in public institutions. Either way, they succeed: trust in society’s foundations is weakened. This is evident in how fake news and conspiracy theories that circulated years earlier resurface whenever a related topic arises. By the time these narratives reappear, they’re often spread by real people who unknowingly propagate disinformation. Once a false narrative spreads through social media discourse, it gains online immortality: even years later, debunked conspiracies and fake news continue to surface in discussions, embedded as part of the narrative.


Originating in Russia, a network of fake profiles worked to discredit political figures supporting Ukraine during the Russia-Ukraine war.

What Do State Actors Want? 

The conflict between Western democracies and non-democratic states has never truly ceased, even if it’s no longer fought with cannons and bombs. Call it a cultural war, a battle for global influence, or propaganda – in the end, spreading disinformation isn’t the ultimate goal for state actors. It’s a means to an end, part of a larger struggle for dominance, where controlling the narrative is just one piece of the game. Attacking one candidate, supporting another, targeting a Fortune 500 company, or promoting a divisive influencer – as long as the pot is stirred, the conflict continues. Of course, spreading disinformation is a lot cheaper than moving an aircraft carrier, which is why the disinformation war rages on.

Tracking and Monitoring Is Key 

Around the world, governments and public organizations are beginning to understand, respect, and fear the power of social media platforms in shaping public opinion – and the ability of state actors to manipulate these forces. Learning to identify, expose, and prevent these bad actors from influencing our perceptions and our lives is crucial to restoring trust in society.

In this article, I discussed the methods shared by three major state actors that manipulate social discourse. In future articles, I will explore the differences among them and explain the evolving characteristics of disinformation campaigns originating from Russia, China, and Iran. I’ll also cover how to detect these attacks and identify fake profiles online. In the meantime, stay wary of those who may be trying to manipulate you. Ask yourself: can you be sure that the person you’re speaking with is real? And if not, what might they want?

An analysis of a bot network, consisting of 1,914 fake profiles, that gained a potential reach of 19 million views, as well as 20,000 engagements. 

***

Yossef Daar

Yossef Daar is the co-founder and CPO of Cyabra. Yossef holds a significant role in shaping Cyabra’s vision, strategy, and evolution. His work involves leading product development and innovation in Cyabra’s AI-driven tools, ensuring the platform addresses disinformation challenges effectively, and aligning the product with the needs of businesses and government agencies. In the past, Yossef served for 13 years as the head of information warfare in the IDF’s special operations department.

The Battle for Public Trust: Mike Pompeo on Cyabra’s Fight Against Disinformation
https://cyabra.com/blog/the-battle-for-public-trust-mike-pompeo-on-cyabras-fight-against-disinformation/
Thu, 27 Feb 2025

Mike Pompeo, 70th Secretary of State, met with Cyabra to discuss the impact of disinformation on public trust and the fight against online manipulation.


“Disinformation has always been around, but it’s never been at the scale, or velocity, or magnitude that we all are experiencing in the world today.”

These words resonate deeply in the current landscape of misinformation and disinformation. Mike Pompeo, 70th Secretary of State, former CIA Director, and Cyabra’s board member, emphasizes the critical role that companies like Cyabra play in fortifying trust in the world’s major institutions against the onslaught of disinformation, further amplified by the harmful use of AI tools. “Cyabra has created a very rigorous methodology by which they are able to identify those risks on social media originating from artificial intelligence.”

Cyabra recently sat down with Secretary Pompeo to discuss the impact of disinformation on public trust, the growing challenges of detecting online manipulations, and the dual role of AI tools – on the one hand, weaponized to spread and disseminate disinformation, but on the other hand, harnessed by companies like Cyabra to help detect and combat fake campaigns, foreign influence, and other attempts to manipulate public opinion. 

AI And the Rising Threat of Disinformation 

Secretary Pompeo explains that “Disinformation is often not only a corollary to, but aimed directly at those very institutions in an effort to undermine their credibility.” This undermining can erode citizens’ trust in their governments, impacting democratic processes and the functioning of society as a whole. 

When trust in institutions wanes, it jeopardizes the fundamental operations of democracies. While addressing the need for trusted sources in the information space to counter those threats, Pompeo asserts, “The risk to democratic nations is very real.”

One of the major concerns raised by Pompeo is the exploitation of Generative AI and other AI tools for creating and spreading disinformation and fake news. “The quality of this disinformation is very, very high,” he says. “It is difficult for ordinary citizens to discern whether a message from an institution is real, or whether this is AI enabled or generated information, that is false and nefarious.” 

Pompeo explains that AI has also made creating and propagating disinformation extremely cheap, making it much more accessible for malicious actors.

The Role of Cyabra in Defending Democracy

In a world where mis- and disinformation spreads rapidly, it is crucial for governments to adopt effective strategies to counter these threats. Secretary Pompeo stresses that governments have a responsibility to protect their institutions from disinformation attacks, and notes that Cyabra, with its AI-powered platform, serves as a vital ally for governments and organizations in this fight.  

“Cyabra has demonstrated that it can quickly identify bots and disinformation, detect how quickly it’s propagating, and decide which problem to tackle first,” Pompeo explains, referring to Cyabra’s capabilities to detect mis- and disinformation and identify fake profiles and bot networks manipulating online discourse. 

“Cyabra can combat disinformation at a scale, a speed and a cost that can truly counter disinformation – at the scale that AI has the capacity to generate and disseminate,” Pompeo continues. He stresses that Cyabra’s role extends beyond just protecting institutions, to protecting societies and democracy itself. The clarity that Cyabra provides is essential for a democratic society, in which informed citizens are able to discern fact from fiction and make informed decisions.

Cyabra’s Mission And the Future of Public Trust

“False information is often driven by state-nation actors with a deep agenda,” Secretary Pompeo warns. As the battle against disinformation continues to evolve, institutions must remain vigilant and adaptable, utilizing technology and fostering transparency to maintain public trust. Pompeo further underscores the critical importance of organizations like Cyabra in the fight against disinformation, explaining that as we navigate an increasingly complex information landscape, the need for trustworthy sources has never been more urgent. The stakes are high, and the health of our democracies depends on it. 

In conclusion, the impact of disinformation on public trust is profound, and addressing it requires a collective effort from institutions, governments, and citizens alike. “Governments should know that it is incredibly important to protect their most vital institutions,” Pompeo adds, concluding by reiterating that Cyabra’s capacity to prevent disinformation from permeating public discourse is crucial as we work toward a future where public trust is restored and maintained. By equipping ourselves with these tools, we gain a significant advantage in the fight to protect democracy and uphold the values that define it.

1,000+ Fake Accounts Disrupting German Elections
https://cyabra.com/blog/1000-fake-accounts-disrupting-german-elections/
Thu, 20 Feb 2025

Over 1,000 fake profiles artificially boosted support for the far-right party AfD, spreading hundreds of misleading posts, attacking political opponents, and amplifying pro-AfD narratives.

Germany’s 2025 election is under attack. More than 1,000 fake accounts are manipulating political discourse, inflating support for the far-right AfD, and distorting public perception. Cyabra’s latest analysis uncovers a sophisticated disinformation operation designed to shape the outcome of the upcoming vote.

Disinformation and coordinated bot campaigns have long been a feature of election cycles, and this one is no exception. However, it also represents a significant evolution in the tactics of state actors seeking to influence voters, drive online discourse toward extremism and polarization, and erode trust in public institutions.

Here’s what Cyabra uncovered: 

Who Do the Bots Vote For? 

During January and February, Cyabra monitored social media discourse related to Germany’s elections, analyzing hashtags, keywords, communities, and the authenticity of the profiles participating in online conversations.

Cyabra’s analysis detected over 1,000 fake profiles artificially boosting support for the far-right party AfD (Alternative for Germany), infiltrating authentic social media conversations to spread hundreds of misleading posts, attack political opponents, and amplify pro-AfD narratives.

Fake pro-AfD narratives were particularly pushed in conversations around the three major political parties (AfD, SPD, and Greens), where fake profiles created the illusion of widespread AfD support while drowning out real political debate. 

The Intricate Tactics of Election Interference

Cyabra’s research uncovered that 47% of the fake profiles have been active for over a year, suggesting a well-orchestrated, long-term influence operation designed to manipulate German public perception. The rest of the fake profiles, created in the months preceding the elections, show the gradual rise of manipulation efforts as voting day draws near.

In the picture: the “age” of fake profiles in German election discourse (based on creation date)

Across X, fake accounts were focused on three separate disinformation campaigns:

  • Alice Weidel and AfD – the co-chairwoman of AfD, Alice Weidel, had a high presence of bots interacting with her content, pushing positive and supportive messaging. 23% of her engagements originated from fake profiles. A trending post by Weidel, with a potential reach of 126 million views, was flooded with fake engagement – a third of all interactions (33%) were fake. 
  • The Greens – 15% of accounts discussing Germany’s Green Party were fake. The fake profiles promoted negative, anti-Green narratives, and amplified support for AfD.
  • Olaf Scholz and SPD – in conversations related to Germany’s chancellor and his party, the SPD (Social Democratic Party), 14% of the profiles were fake. An analysis of recent posts by Scholz uncovered an even higher share, 22% fake profiles interacting with his content, amplifying criticism while again pushing support toward AfD.
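Shares like the 23%, 15%, and 14% figures above come down to a simple ratio over labeled engagement records. A minimal sketch of that arithmetic, assuming a hypothetical `is_fake` label on each engagement (purely for illustration, not Cyabra’s actual data model):

```python
def fake_share(engagements):
    """Return the percentage of engagements attributed to fake profiles.

    `engagements` is a list of dicts carrying a boolean "is_fake" flag --
    a hypothetical schema used only to illustrate the ratio.
    """
    if not engagements:
        return 0.0
    fakes = sum(1 for e in engagements if e["is_fake"])
    return 100.0 * fakes / len(engagements)

# 23 fake interactions out of 100 total yields a 23% fake share,
# matching the proportion reported for engagements with Weidel's content.
sample = [{"is_fake": i < 23} for i in range(100)]
print(fake_share(sample))  # 23.0
```

The same ratio, computed per conversation or per post, is what lets a single trending post (like Weidel’s) show a higher fake share than the surrounding discourse.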

While the three disinformation campaigns acted separately and with different networks of profiles, they all had the same goal: to promote support for AfD and discreetly target AfD’s opponents. 

The narratives used by fake profiles morphed, adapting seamlessly to the conversations they joined: in Weidel’s trending posts, fake profiles praised her, pushing the message that Weidel and AfD offered hope for a better future. With the Green Party, fake profiles attacked the party’s slogan, ‘Brandmauer’ (German for ‘firewall’, a stand against far-right extremism and the AfD), claiming that the Greens’ policies would destroy Germany’s future. Fake comments on Chancellor Scholz’s posts discredited his leadership and his party, amplifying criticism. All three networks of bots frequently mentioned AfD.

In the picture: Weidel’s posts, the network of fake profiles that integrated into authentic engagements (Green: real profiles. Red: fake profiles), and examples of fake profiles supporting Weidel in the comment section.  

How Can We Protect the Integrity of Election Discourse?

The narratives used by fake profiles in Germany’s election discussions followed classic disinformation tactics, commonly seen in past European elections: framing certain policies as threats and pushing similar messages across multiple discussions – both to promote their candidate and to discredit the opposition.

However, the tactics used by fake profiles this time were particularly sophisticated: By latching onto trending leaders’ posts, blending into authentic conversations, and coordinating positive and negative disinformation efforts, bots created the illusion of widespread support for their candidate – while simultaneously manufacturing massive criticism of their opponents. In doing so, they amplified polarization and fueled fear and anxiety about the future.

“Cyabra’s findings are a wake-up call: social media is being weaponized to manipulate the German election. The scale and coordination of these disinformation campaigns reveal a deliberate effort to shape public perception, sway undecided voters, and push a specific political agenda. With AI-driven disinformation manufacturing political narratives at scale, Germany’s election is far from the last to have its integrity at major risk. Democracies and government organizations must act now to combat the growing threat that disinformation poses to election integrity.”

Learn more about Cyabra’s OSINT capabilities in monitoring election discourse, uncovering fake campaigns, and identifying bot networks. 

Want to see how Cyabra uncovers fake campaigns in real time? Download the full report.

State Farm’s Wildfire Fallout: Misinformation & Crisis Mis-management
https://cyabra.com/blog/state-farms-wildfire-fallout-misinformation-crisis-mis-management/
Thu, 13 Feb 2025

Misinformation claiming State Farm had prior knowledge of the LA fires caused huge damage to the company’s reputation online.

In the wake of the devastating wildfires that swept across Los Angeles, a parallel firestorm ignited on social media – this time targeting insurance companies. The biggest outrage was directed at State Farm, the largest property and auto insurer in the US. 

While insurance companies are no strangers to public resentment after major crises, State Farm’s online backlash was more than just consumer discontent. The insurance giant became the latest in a long line of corporations swept into a storm of misinformation and conspiracy theories, while its crisis management response was found lacking.

Here’s what Cyabra uncovered: 

From Wildfires to Online Firestorm 

State Farm was not the sole target of online criticism following the disastrous wildfires in LA. Throughout January, Cyabra’s analysis uncovered an overwhelmingly negative sentiment towards all major insurance companies in the US, with thousands of posts accusing companies of corporate greed and unethical pricing tactics. 

However, the backlash against State Farm was far greater than that against its competitors: negative discourse around the company, from both authentic and fake accounts, reached 1.1 million engagements and over 100 billion potential viewers. The crisis escalated further as customers highlighted the fact that State Farm had canceled 69% of its insurance policies just before the fires, with many sharing personal stories of losing coverage after years of loyalty.

However, this factual claim quickly morphed into a full-fledged conspiracy theory, and misinformation accusing the company of either causing the wildfires or having prior knowledge of the disaster started spreading rapidly, amassing 21.8 million views.

Authentic profiles sharing the conspiracy about State Farm’s “prior knowledge” of the wildfires reached 21.8 million potential views on X and Facebook. 

Viral Posts Fanning the Flames

Cyabra’s analysis also revealed that one of the key figures unintentionally amplifying misinformation about State Farm was actor James Woods (@RealJamesWoods). Woods shared his experience with State Farm’s policy cancellation. While his post on X expressed customer frustration and did not endorse the conspiracy, it played a significant role in spreading it. Garnering 2.4 million views, the post quickly became fertile ground for conspiracists to further push the “prior knowledge” narrative.

James Woods’ viral post criticizing State Farm and the conspiracy comments that were quick to follow. 

@unusual_whales, a widely followed finance account, posted a similar message that garnered 82,900 interactions and 13.3 million views. Once again, the comments section was overrun with fake news, misinformation, and conspiracy theories, as spreaders latched onto the post’s virality.

@unusual_whales’ post about State Farm’s policy cancellations.

State Farm’s Crisis Mis-management

As the crisis escalated, State Farm attempted to mitigate the fallout by having its agents post identical responses across their social media accounts, aiming to defuse tensions. However, Cyabra’s analysis revealed that this coordinated effort had little impact, gaining almost no exposure and only 52 overall engagements. The repetitive messaging came across as artificial, staged, and poorly executed. 

Cyabra’s analysis uncovered the identical response that was posted from different accounts of State Farm agents.

Early Warnings and Smarter Crisis Management

State Farm’s crisis was the result of several converging factors: the wildfires, last-minute policy cancellations, the company’s delayed and haphazard response, and a political climate in which fake news, misinformation, and disinformation – already on the rise in recent years – now spread more easily than ever.

However, this crisis also highlights how quickly false narratives can spiral online, inflicting serious damage on a brand’s reputation, and underscores the critical need to detect them early.

For corporations facing viral crises, it is essential to:

  • Proactively monitor discourse, sentiment, and narratives in real time to detect emerging threats before they escalate, and assess their impact.
  • Identify key amplifiers driving the conversation – whether customers, influencers, or coordinated fake profiles – and analyze their role in spreading harmful narratives.
  • Respond swiftly and strategically with messaging that is authentic, organic, adaptable, and personal.

It is also crucial to remember that crisis management and response are no longer just a matter of damage control – they are about preparation, agility, and strategy. Only with robust monitoring and analysis tools can companies effectively mitigate crises, make informed decisions, and fully understand the scope of a rising issue or false narrative spreading across the digital landscape.

To learn more, contact Cyabra.

Download the full analysis by Cyabra

The post State Farm’s Wildfire Fallout: Misinformation & Crisis Mis-management appeared first on Cyabra.

“Understanding What is Real and What is Fake is Critical”: Cyabra’s Impact at Golin https://cyabra.com/blog/understanding-what-is-real-and-what-is-fake-is-critical-cyabras-impact-at-golin/ Tue, 11 Feb 2025 10:33:13 +0000 https://cyabra.com/?p=15034 Jonny Bentwood, Global President of Data and Analytics at PR agency Golin, explains how Cyabra helps protect clients’ reputations.

The post “Understanding What is Real and What is Fake is Critical”: Cyabra’s Impact at Golin appeared first on Cyabra.

In today’s digital landscape, the line between reality and fabrication has blurred significantly. This shift has tilted the advantage toward those spreading mis- and disinformation, making it harder for brands to protect their reputations. Jonny Bentwood, the Global President of Data and Analytics at international PR agency Golin, eloquently articulates the challenges brands face in combating harmful fake narratives, and explains how Cyabra enhances Golin’s capabilities to protect clients’ reputations. 

The Rising Threat of False Narratives

“The real problem we have nowadays is that it becomes so easy to create fake content,” Jonny points out. He goes on to explain that this proliferation of false narratives can severely damage a brand’s reputation, often leading to crises that are entirely unfounded. With consumers and potential customers spending more time online than ever before, and receiving most of their information from unofficial channels, brands must navigate a complex environment flooded with mis- and disinformation.

“There have been so many situations where brands are caught up in a situation where misinformation is damaging their reputation – but it’s not really happening,” says Jonny, explaining how fake profiles can amplify crises, making them appear much worse than they are. For Jonny and Golin, the conclusion was evident: this reality necessitates robust solutions to combat these challenges.

Cyabra: A Game-Changer for Golin


Golin’s partnership with Cyabra has transformed the agency’s approach to data analytics and crisis management. Jonny states, “Now that we’ve got Cyabra as part of our data stack, we have unlocked opportunities that we weren’t able to do before.” Cyabra’s AI-powered platform acts as an early warning system, enabling Golin to stay ahead of potential threats before they escalate.

Jonny highlights one of Cyabra’s standout features: the ability to identify fake narratives, uncover the authors behind them, determine whether they are real or fake, and adjust response strategies accordingly. “Chopping the head off the snake right at the beginning because we know the author is the most crucial part of this,” Jonny says, following up with several examples of fake profiles spreading and amplifying disinformation during crises. Referencing Cyabra’s authenticity detection and its ability to confidently identify real and fake profiles in online conversations, Jonny explains that this proactive approach allows Golin to mitigate the impact of misinformation effectively.

The Importance of Early Detection

Jonny stresses the significance of early detection in the fight against misinformation: “We’re never going to be able to stop fake content being out there; it’s whack-a-mole.” The focus, he explains, should be on recognizing when misinformation reaches a tipping point, at which point it can start spreading through legitimate channels. With Cyabra, Golin has gained not only the ability to detect false narratives, but also the confidence to address them decisively. 

“Being able to say ‘this is fake and this is real’ is not something that was in our capabilities before,” Jonny says. The partnership with Cyabra has provided Golin with a much-needed arsenal to combat the rising tide of disinformation effectively. As brands continue to grapple with the challenges posed by fake narratives, having a reliable ally like Cyabra is invaluable.

Authenticity & the Future of Brand Reputation

As Jonny Bentwood aptly summarizes, “If we’re going to protect the reputation of a brand, making sure that we understand what is fake and what is real is critical.” The integration of Cyabra into Golin’s operations is not just a technological upgrade; it represents a commitment to safeguarding brand integrity in an increasingly hostile digital environment. With Cyabra at their side, Golin is well-equipped to navigate these challenges and protect their clients’ reputations with confidence.

__________

Golin is a global PR agency with over 1,700 employees across more than 50 offices worldwide. At the beginning of 2024, Golin partnered with Cyabra and has since regularly used Cyabra’s solutions to protect its clients against online attacks and safeguard their reputations. Follow Golin and Jonny Bentwood on LinkedIn.

DeepSeek Hype Fueled by Fake Profiles https://cyabra.com/blog/deepseek-hype-fueled-by-fake-profiles/ Thu, 06 Feb 2025 18:32:00 +0000 https://cyabra.com/?p=15006 Cyabra’s latest investigation revealed that much of the hype around DeepSeek isn’t organic: it's promoted by fake profiles.

The post DeepSeek Hype Fueled by Fake Profiles appeared first on Cyabra.

DeepSeek, a new AI developed by a Chinese startup, topped app download charts, triggered a trillion-dollar market loss in the US, and has been a source of inescapable online hype that seems to grow bigger and bigger with time. 

However, Cyabra’s latest research reveals that much of this excitement isn’t organic. In fact, it’s part of a coordinated campaign powered by fake profiles. Furthermore, those coordinated fake profiles exhibit behavior that is usually attributed to Chinese bot networks. 

Coordinated disinformation campaigns led by foreign state actors have multiplied in recent years. With the rise of AI tools, they became part of an evolving playbook used to influence public trust, markets, and even global policymaking.

Here’s the full story: 

The Tactics of DeepSeek’s Digital Cheerleaders

With its rising influence, particularly on the US stock market, DeepSeek became the subject of significant online engagement the moment it was launched. 

Between January 21 and February 4, Cyabra conducted a large-scale analysis of 41,864 profiles discussing DeepSeek-related content across major social platforms. 3,388 profiles were identified as fake. Most of them were active on X, where fake profiles accounted for 15% of engagement – double the typical rate on social media. 

The inauthentic accounts promoting DeepSeek were not operating independently: they formed a coordinated network working in sync, actively pushing positive narratives to amplify DeepSeek’s hype and create the illusion of widespread excitement and adoption. Through thousands of posts, comments, and shares, those fake profiles had a massive impact on social discourse. At peak activity on February 3, fake profiles generated 2,158 posts in a single day.

The fake profiles employed two primary tactics: 

1. Amplifying each other by interacting within the network to create the appearance of broad, positive engagement. 

2. Integrating into authentic conversations, interacting with genuine users who were unaware they were engaging with bots.

In the picture: The two methods employed by fake profiles: interacting with other fake profiles vs. integrating into authentic conversations.

Another tactic fake profiles employed to maximize exposure was engaging with high-visibility posts from authentic profiles. For example, one widely viewed post by @FanTV_official, which amassed over 480,000 views, was flooded with coordinated DeepSeek promotions. By inserting comments into already-popular discussions, the fake profiles increased credibility and ensured their content reached a broader audience. This tactic – piggybacking on trending posts to amplify fake engagement – has become an emerging strategy in online influence campaigns.

The coordinated profiles primarily posted in English and claimed to be based in the US and Europe. However, their synchronized activity suggested they originated from a single source. Cyabra also detected frequent mentions of China as the origin of DeepSeek, seemingly intended to attribute credit and foster positive sentiment towards China itself. 

As the artificial positive sentiment grew, various networks of fake profiles began exploiting the #DeepSeek hashtag for their own purposes. One network used the hype to promote scams, encouraging users to purchase tokens, while another leveraged the buzz to promote PublicAI, a competitor to DeepSeek, by citing a recent security breach on the platform.

In the picture: Second and third campaigns of fake profiles used the #DeepSeek hype to push competitors and promote scams.

The Anatomy of a Fake Profile

Fake profiles in the discourse exhibited clear telltale signs of a coordinated bot network:

  • Avatar recycling: Multiple fake profiles used the same profile pictures, often generic images of Chinese women.
  • Recent creation dates: 44.7% of these fake accounts were created in 2024, aligning with DeepSeek’s rise.
  • Synchronized posting: Fake accounts posted simultaneously to maximize visibility.
  • Identical content: Many accounts copy-pasted identical praise-filled comments.

These characteristics are consistent with the typical behavior of Chinese bot networks. By acting as a coordinated front, these accounts created an illusion of authenticity and virality, while much of the enthusiasm was, in reality, artificially engineered.

In the picture: Fake profiles in the DeepSeek discourse. Notice the identical posts (top right).
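The four telltale signs listed above can be combined into a naive coordination score. The sketch below is purely illustrative – the `Profile` fields, the 2024 cutoff, and the scoring thresholds are assumptions made for the example, not Cyabra’s actual detection model:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Profile:
    handle: str
    avatar_hash: str     # hash of the profile picture
    created_year: int
    post_minutes: list   # timestamps (epoch minutes) of recent posts
    posts: list          # text of recent posts

def bot_signals(profiles):
    """Score each profile 0-4 on the four telltale signs of a coordinated network."""
    avatar_counts = Counter(p.avatar_hash for p in profiles)
    text_counts = Counter(t for p in profiles for t in p.posts)
    minute_counts = Counter(m for p in profiles for m in p.post_minutes)

    scores = {}
    for p in profiles:
        score = 0
        if avatar_counts[p.avatar_hash] > 1:                   # avatar recycling
            score += 1
        if p.created_year >= 2024:                             # recent creation date
            score += 1
        if any(minute_counts[m] > 1 for m in p.post_minutes):  # synchronized posting
            score += 1
        if any(text_counts[t] > 1 for t in p.posts):           # identical content
            score += 1
        scores[p.handle] = score
    return scores
```

A cluster of profiles scoring 3–4 would warrant closer inspection; a production system would, of course, weigh many more behavioral signals than these four.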

Artificial Hype, Real Risks

DeepSeek isn’t just the story of a new and exciting AI model. It’s a story of influence: of shaping public perception through a meticulously designed, premeditated influence operation. Malicious actors are exploiting real-world events, preying on heightened emotions, and weaponizing social media to fuel anger, instill fear, and deepen societal divisions. The same disinformation tactics once used to sway elections and incite protests are now being deployed to shape the AI arms race.

In this case, the ability to distinguish organic enthusiasm from manufactured hype is more critical than ever – but it’s only half the story. As the influence of fake profiles and the disinformation tactics employed by state actors continue to grow – becoming not only more common but also harder to detect – the need for tools to combat these coordinated campaigns grows more urgent. Identifying the fake actors behind these campaigns, analyzing their behavior, and detecting the fake content they spread has become crucial to protecting public discourse, trust, and perception. 

To learn more about comprehensive solutions to detect and combat coordinated fake campaigns, contact Cyabra.

Cyabra Introduces “Insights”: Turning Complex Data Into AI-Driven Actionable Insights https://cyabra.com/blog/cyabra-introduces-insights-turning-complex-data-into-ai-driven-actionable-insights/ Sat, 01 Feb 2025 22:30:07 +0000 https://cyabra.com/?p=14755 Cyabra’s Insights empowers brands and government organizations to detect and understand online threats in real-time.

The post Cyabra Introduces “Insights”: Turning Complex Data Into AI-Driven Actionable Insights appeared first on Cyabra.

Cyabra’s Insights empowers brands and government organizations to detect and understand online threats in real-time, delivering actionable insights that once required the expertise of an entire team of analysts. Here’s how Cyabra makes it happen:

Too Much Information? 

In 2024, nearly every brand you know dedicates time, resources, and specialized roles to monitoring, analyzing, and understanding social media. Marketing teams, brand managers and strategists, crisis management experts, PR agencies, market researchers, customer insights managers, and growth officers – all these professionals rely on social media data and analysis on a weekly, daily, or even hourly basis.

Online attacks against brands have become increasingly frequent in recent years, causing massive financial and reputational damage. In response, monitoring and analysis tools have evolved, now offering an abundance of data: from sentiment to the age and location of those involved, from uncovering dominant narratives to identifying fake profiles spreading disinformation and manipulating social discourse.

While analysts and data scientists thrive on this precise, detailed, real-time information, the sheer volume of data can be overwhelming for most of us. Decoding complex data has become a time-consuming part of our daily work life.

This is where Cyabra’s Insights steps in.

Cyabra’s “Insights” in action: detecting fake profiles manipulating the conversation

Navigating the Online Data Maze

Insights takes the overwhelming amount of data gathered by Cyabra’s AI, which continuously monitors and analyzes online conversations and news sites, and breaks it down into easy-to-understand answers and visuals.

With Insights, brands can uncover accessible, actionable results, understand key takeaways, and most importantly, spend less time on analysis and research, freeing up time to use the uncovered data more effectively.

Insights’ Essential Features include:

  1. Clear, Actionable Visuals: Insights reveals patterns, trends and key metrics, including sentiment, engagement, communities, influencers, geographic and demographic data, hashtags, and peak activity – all while sifting the real from the fake, providing a clear view of the authenticity of conversations.
  2. User-Friendly Q&A Format: Insights supplies answers to critical questions in seconds – sometimes, even questions you didn’t know you needed the answer to! Insights enables Cyabra’s clients and partners to make informed, confident decisions, eliminating guesswork and allowing them to focus on the bottom line.
  3. Automated Disinformation Detection: Insights instantly identifies bots, fake profiles, deepfakes, manipulated GenAI content, toxic narratives, rising crises, harmful trends, and any other threats to brand reputation.

Cyabra’s “Insights” in action: detecting the most viral narrative 

The Bottom Line in One Short Line 

Insights’ intuitive visuals and automated Q&As are designed around the most common queries and needs of Cyabra’s diverse clients across both private and public sectors. Insights helps brands and governments instantly uncover harmful narratives, detect fake accounts, and analyze how false content spreads – saving time and resources and supporting swift responses during critical moments, all without requiring technical expertise.

As we head into 2025, following the largest election year and a record year for disinformation, Cyabra is launching Insights at a pivotal moment. False narratives, fake accounts, and AI-generated content are spreading faster than ever, costing businesses and governments billions annually while eroding public trust and reputations. False news stories are 70% more likely to be shared than true ones, and experts predict that in the coming year, disinformation will become the top challenge for public and private sectors worldwide. With disinformation spiking during high-stakes events like elections, the need for rapid data analysis and response tools like Insights has never been greater.

“Clients often ask, ‘What’s next?’ when confronting disinformation,” said Yossef Daar, CPO of Cyabra. “Insights takes the guesswork out of the analysis, giving users a straightforward, visual way to see where false narratives are spreading, who’s behind them, and what’s driving engagement. This enables them to respond to digital threats faster and more effectively.”

Cyabra’s “Insights” in action: detecting the most viral narrative 

“Every second matters when countering disinformation,” said Dan Brahmy, CEO of Cyabra. “Insights turns vast amounts of data into clear, actionable knowledge, empowering our clients to uncover the real story behind the data and respond before the damage is done. It’s like having an expert analyst at your fingertips.”

During beta testing, Insights enabled:

  • A Fortune 500 company to neutralize reputational damage in minutes after detecting a disinformation spike about its CEO.
  • A government agency to uncover and disrupt hashtags fueling disinformation campaigns, enabling quicker interventions.

Insights is now available on Cyabra’s platform. To learn more about Insights and see it in action, contact Cyabra.

2024 Brand Crisis Round-Up – Part 1 https://cyabra.com/blog/2024-brand-crisis-round-up-part-1/ Tue, 31 Dec 2024 14:38:39 +0000 https://cyabra.com/?p=14900 A recap of some of the most significant disinformation-fueled attacks and boycotts against major companies in 2024 - and what we can learn from them: 

The post 2024 Brand Crisis Round-Up – Part 1 appeared first on Cyabra.

Many marketing executives, PR agencies, and social media managers will remember 2024 as the year of fear and confusion in their field: The year they woke up at 4 AM to thousands of negative mentions and couldn’t explain how or why this harmful trend started. The year classic crisis management tactics that have proven reliable for years suddenly failed spectacularly. The year in which companies found themselves under a storm of fake news, mis- and disinformation, as fake profiles escalated and amplified boycotts and online attacks. 

In 2023, GenAI entered our lives and changed them irrevocably. In 2024, GenAI became one of the most popular tools for bad actors seeking to cause reputational and financial harm to major companies. 2024 was also the biggest election year in history, with over 2 billion people across the globe casting their votes, which caused many brands to get swept into a storm of fake profiles amplifying negative hashtags for completely unrelated purposes. 

Cyabra has been closely monitoring how disinformation and fake profiles polarized and shaped the narrative in 2024. Here’s a roundup of some of the most significant disinformation-fueled attacks and boycotts of the year – and what we can learn from them: 

December: Nestle & the Milk Conspiracy

1 in 4 Profiles Spreading #BoycottNestle Is Fake

Following misinformation regarding the cattle feed additive Bovaer, major brands selling milk products, such as Arla Foods, Nestle, and even Tesco and Aldi, were swept into a storm of fake news and conspiracy theories. Despite the fact that Bovaer was widely tested and declared safe by many global health organizations, the false narrative continued to spread, further amplified by fake profiles. Nestle, the latest brand to get swept into this destructive trend last December, was hit by a huge presence of fake profiles, which made up 26% of all profiles in the online conversation. Using hashtags like #BoycottNestle and #BoycottBovaer, fake profiles latched onto viral posts by authentic influencers attacking Nestle and magnified the crisis, bringing the negative hashtags to millions of views. Nestle is still dealing with the reputational damage to its brand to this day.

November: Walmart Boycotted Over Cultural Insensitivity

46% of the Profiles Pushing Negative Hashtags Were Fake

At the end of November, Walmart came out with a new clothing and underwear line featuring the Hindu deity Lord Ganesha. The company was quickly criticized for mockery of Hinduism and for general religious disrespect. After a few days of rising backlash, Walmart removed the campaign and apologized, but the boycott continued to trend. At the peak of the trend, 46% of all online content surrounding Walmart was created by fake profiles. Those posts were overwhelmingly negative, and used hashtags such as #BoycottWalmart, #CancelWalmart, and #RespectHinduSentiment to amplify their messages. 

November: Jaguar Is the Latest Victim of #GoWokeGoBroke

Fake Profiles Using #Faguar Pushed the Trend to Millions of Eyes

The backlash against luxury car brand Jaguar started when the company released a controversial new campaign, which brought on a tidal wave of criticism for what authentic profiles called “promoting woke aesthetic over luxury and performance”. As the hashtags #BoycottJaguar and #GoWokeGoBroke began trending, accompanied by the derogatory #Faguar, fake profiles infiltrated the conversation and amplified the negative sentiment, creating the illusion of widespread discontent. 20% of the profiles that promoted #BoycottJaguar and #Faguar were fake; they generated thousands of posts that received almost half a million views, while also amplifying negative articles to reach millions more. 
Read more about Jaguar’s crisis

August: A Single Misplaced Product Sparks Coca-Cola Boycott

Coca-Cola Unwittingly Dragged Into Olympic Controversy

The boycott started when the president of the Olympic committee appeared in the media during the games to address claims that a female boxer had been given an unfair advantage. General negative sentiment against the boxer surged after the announcement. Coca-Cola actually had nothing to do with the decision or the boxer – a Coca-Cola bottle was simply resting on the table while the president made the announcement. Coca-Cola was just one of many sponsors of the Olympic games (which also included Intel, Samsung, and Visa), but it was the only one to get dragged into the backlash. 20% of the profiles in the negative discourse against Coca-Cola were fake, utilizing #boycottCocaCola and #boycottolympics2024 to attack the company. 

Continue to Part 2: 2024 Brand Crisis Round-Up

2024 Brand Crisis Round-Up – Part 2 https://cyabra.com/blog/2024-brand-crisis-round-up-part-2/ Tue, 31 Dec 2024 14:38:00 +0000 https://cyabra.com/?p=14915 A recap of some of the most significant disinformation-fueled attacks and boycotts against major companies in 2024 - and what we can learn from them: part 2

The post 2024 Brand Crisis Round-Up – Part 2 appeared first on Cyabra.

Cyabra has been closely monitoring how disinformation and fake profiles polarized and shaped the narrative in 2024. Here’s a roundup of some of the most significant disinformation-fueled attacks and boycotts of the year – and what we can learn from them. Make sure to also check out part 1 of this 2024 brand crisis round-up!

July: Netflix Swept Into Election Disinformation 

Election Bots Already Active in Social Discourse Latched Onto the Trend

Netflix’s co-founder and former CEO donated $7 million to Harris’s presidential campaign, sparking a surge of posts calling to boycott and cancel Netflix – which, unlike classic “cancel” calls, actually resulted in a dramatic spike in subscription cancellations for Netflix. 24% of the profiles that targeted Netflix on X were fake profiles that had already been active for a while, spreading election-related messages. Those fake profiles amplified the boycott movement while pushing anti-Harris content, achieving a record-breaking 19.5 million views for their content alone, and helping to push #CancelNetflix to 309 million views in just one week.
Read more about Netflix’s crisis

May: Fake Influencers Drove Burger King Backlash 

A Few Complaints Escalated Into a Huge Negative Trend

Complaints about prices, taste, and overall discontent that began at the end of May quickly spiraled, amplified by fake profiles on X. Within a single day, negative content spiked, increasing by 192% compared to previous weeks. Fake profiles discussing the brand comprised over 39% of the profiles in the conversation, and their posts and replies reached over 44 million views across May and June. One fake account on X was crucial in amplifying the crisis, spreading calls for a boycott when the negativity was just beginning to trend. 
Read more about Burger King’s crisis

April: “Steal From Loblaws Day” Hits Huge Chain

Fake Profiles Utilized #Greedflation to Attack Loblaws Supermarkets

The protest against Canadian supermarket giant Loblaws started as legitimate consumer-led resentment, following concerns about the cost of living and accusations of “greedflation”, but was picked up by fake profiles, which amplified the angry voices and used #BoycottLoblaws and #CancelLoblaws to escalate a huge crisis that resulted in real-world damages. When the viral complaints radicalized and turned into “Steal From Loblaws Day” posters hung on the streets of major Canadian cities, fake profiles continued to spread the posters, resulting in more viral photos of empty Loblaws parking lots. 
Read more about Loblaws’ crisis

March: Fake Profiles Pump Planet Fitness Boycott

#BoycottPlanetFitness Reached a Potential 200 Million Views

Planet Fitness faced mounting scrutiny and a major stock drop after canceling a woman’s membership for taking a picture of a transgender woman in the locker room. The very first use of the hashtag #BoycottPlanetFitness, on March 15, came from a fake profile. Other fake accounts shared and reposted content from controversial authentic influencers with millions of followers, who supported and promoted the boycott calls. Planet Fitness’ stock plummeted after the crisis and took over a month to recover. 
Read more about Planet Fitness’ crisis

January: Rip Curl Pulled Into the Vortex of Indian Election 

#BoycottRipCurl Was Massively Exploited by Indian Election Bots

International surf sportswear brand Rip Curl came under fire after featuring a transgender surfer in its latest fashion campaign. At the end of January, #BoycottRipCurl, #RipRipCurl and #SaveWomensSports reached over 220 million views across social media platforms, assisted by fake profiles, which accounted for 19% of the profiles in the discourse and 30% of all content. But the worst was yet to come: after Rip Curl removed the campaign and the crisis started dying out, it was resurfaced by Indian election bots, who picked up on the trend and used the hashtags as part of a coordinated fake campaign, driving a 424% increase in the damaging hashtag in one week. 
Read more about Rip Curl’s crisis

2025 Will be the Year of Brand Disinformation 

The lessons from 2024 are clear: traditional brand, crisis, and social media strategies are no longer enough. Major companies must learn to identify fake profiles, detect their impact, and counter their manipulations. They must also take proactive measures to defend against the next issue or crisis, building resilience and staying ahead of these evolving threats. As disinformation tactics evolve, leveraging new, sophisticated AI tools, in 2025 brands must arm their teams with equally advanced defenses to protect against reputational and financial risks. Contact Cyabra to learn more.
