Cyabra

AiThority Interview with Dan Brahmy, CEO & Co-Founder of Cyabra
https://aithority.com/machine-learning/aithority-interview-with-dan-brahmy-ceo-co-founder-of-cyabra/#new_tab
Thu, 20 Feb 2025

1,000+ Fake Accounts Disrupting German Elections
https://cyabra.com/blog/1000-fake-accounts-disrupting-german-elections/
Thu, 20 Feb 2025

Over 1,000 fake profiles artificially boosted support for the far-right party AfD, spreading hundreds of misleading posts, attacking political opponents, and amplifying pro-AfD narratives.

Germany’s 2025 election is under attack. More than 1,000 fake accounts are manipulating political discourse, inflating support for the far-right AfD, and distorting public perception. Cyabra’s latest analysis uncovers a sophisticated disinformation operation designed to shape the outcome of the upcoming vote.

Disinformation and coordinated bot campaigns have long been a feature of election cycles, and this one is no exception. However, it also represents a significant evolution in the tactics of state actors seeking to influence voters, drive online discourse toward extremism and polarization, and erode trust in public institutions.

Here’s what Cyabra uncovered: 

Who Do the Bots Vote For? 

During January and February, Cyabra monitored social media discourse related to Germany’s elections, analyzing hashtags, keywords, communities, and the authenticity of the profiles participating in online conversations.

Cyabra’s analysis detected over 1,000 fake profiles artificially boosting support for the far-right party AfD (Alternative for Germany), infiltrating authentic social media conversations to spread hundreds of misleading posts, attack political opponents, and amplify pro-AfD narratives.

Fake pro-AfD narratives were particularly pushed in conversations around the three major political parties (AfD, SPD, and Greens), where fake profiles created the illusion of widespread AfD support while drowning out real political debate. 

The Intricate Tactics of Election Interference

Cyabra’s research found that 47% of the fake profiles had been active for over a year, suggesting a well-orchestrated, long-term influence operation designed to manipulate German public perception. The rest of the fake profiles, created in the months preceding the elections, show the gradual ramp-up of manipulation efforts as voting day draws near. 

In the picture: the “age” of fake profiles in German election discourse (based on creation date)
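As an illustrative sketch only (not Cyabra's actual pipeline), bucketing accounts by creation date is one simple way to surface the "over a year old" vs. "recently created" split described above. The creation dates below are hypothetical stand-ins; the reference day is Germany's February 23, 2025 election:

```python
from datetime import date

# Hypothetical creation dates, standing in for the account metadata an
# analysis would extract; all four entries are invented for illustration.
creation_dates = [
    date(2023, 5, 1), date(2023, 11, 12),   # long-lived accounts
    date(2024, 10, 3), date(2025, 1, 15),   # created in the run-up
]

def age_buckets(dates, reference_day=date(2025, 2, 23)):
    """Split profiles into 'active over a year' vs. 'recently created'."""
    old = sum(1 for d in dates if (reference_day - d).days > 365)
    return {"over_one_year": old, "recent": len(dates) - old}

print(age_buckets(creation_dates))  # -> {'over_one_year': 2, 'recent': 2}
```

On real data, plotting these buckets over time is what reveals the gradual rise of manipulation efforts as an election approaches.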

Across X, fake accounts were focused on three separate disinformation campaigns:

  • Alice Weidel and AfD – the co-chairwoman of AfD, Alice Weidel, had a high presence of bots interacting with her content, pushing positive and supportive messaging. 23% of her engagements originated from fake profiles. A trending post by Weidel, with a potential reach of 126 million views, was flooded with fake engagement – a third of all interactions (33%) were fake. 
  • The Greens – 15% of accounts discussing Germany’s Green Party were fake. The fake profiles promoted negative, anti-Green narratives, and amplified support for AfD.
  • Olaf Scholz and SPD – in conversations related to Germany’s chancellor and his party, the SPD (Social Democratic Party), 14% of the profiles were fake. An analysis of recent posts by Scholz uncovered an even higher share, 22%, of fake profiles interacting with his content, amplifying criticism while, once again, pushing support toward AfD.

While the three disinformation campaigns acted separately and with different networks of profiles, they all had the same goal: to promote support for AfD and discreetly target AfD’s opponents. 

The narratives used by fake profiles morphed, adapting seamlessly to the conversations they joined: in Weidel’s trending posts, fake profiles praised her, pushing the message that Weidel and AfD offer hope for a better future. With the Green Party, fake profiles attacked the party’s slogan, ‘Brandmauer’ (German for ‘firewall’, a stance against far-right extremism and the AfD), claiming that the Greens’ policies would destroy Germany’s future. The fake comments on Chancellor Scholz’s posts discredited his leadership and his party, the SPD, amplifying criticism. All three networks of bots frequently mentioned AfD. 

In the picture: Weidel’s posts, the network of fake profiles that integrated into authentic engagements (Green: real profiles. Red: fake profiles), and examples of fake profiles supporting Weidel in the comment section.  

How Can We Protect the Integrity of Election Discourse?

The narratives used by fake profiles in Germany’s election discussions followed classic disinformation tactics, commonly seen in past European elections: framing certain policies as threats and pushing similar messages across multiple discussions – both to promote their candidate and to discredit the opposition.

However, the tactics used by fake profiles this time were particularly sophisticated: By latching onto trending leaders’ posts, blending into authentic conversations, and coordinating positive and negative disinformation efforts, bots created the illusion of widespread support for their candidate – while simultaneously manufacturing massive criticism of their opponents. In doing so, they amplified polarization and fueled fear and anxiety about the future.

“Cyabra’s findings are a wake-up call: social media is being weaponized to manipulate the German election. The scale and coordination of these disinformation campaigns reveal a deliberate effort to shape public perception, sway undecided voters, and push a specific political agenda. With AI-driven disinformation manufacturing political narratives at scale, Germany’s election is far from the last to be at major risk. Democracies and government organizations must act now to combat the growing threat that disinformation poses to election integrity.”

Learn more about Cyabra’s OSINT capabilities in monitoring election discourse, uncovering fake campaigns, and identifying bot networks. 

Want to see how Cyabra uncovers fake campaigns in real time? Download the full report.

State Farm’s Wildfire Fallout: Misinformation & Crisis Mis-management
https://cyabra.com/blog/state-farms-wildfire-fallout-misinformation-crisis-mis-management/
Thu, 13 Feb 2025

Misinformation claiming State Farm had prior knowledge of the LA fires caused huge damage to the company’s reputation online.

In the wake of the devastating wildfires that swept across Los Angeles, a parallel firestorm ignited on social media – this time targeting insurance companies. The biggest outrage was directed at State Farm, the largest property and auto insurer in the US. 

While insurance companies are no strangers to public resentment after major crises, State Farm’s online backlash was more than just consumer discontent. The insurance giant became the latest in a long line of corporations swept into a storm of misinformation and conspiracy theories, while its crisis management response was found lacking.

Here’s what Cyabra uncovered: 

From Wildfires to Online Firestorm 

State Farm was not the sole target of online criticism following the disastrous wildfires in LA. Throughout January, Cyabra’s analysis uncovered an overwhelmingly negative sentiment towards all major insurance companies in the US, with thousands of posts accusing companies of corporate greed and unethical pricing tactics. 

However, the backlash against State Farm far exceeded that directed at its competitors: negative discourse around the company, from both authentic and fake accounts, reached 1.1 million engagements and over 100 billion potential views. The crisis escalated further as customers highlighted the fact that State Farm had canceled 69% of its insurance policies just before the fires, with many sharing personal stories of losing coverage after years of loyalty.

However, this factual claim quickly morphed into a full-fledged conspiracy theory, and misinformation accusing the company of either causing the wildfires or having prior knowledge of the disaster started spreading rapidly, amassing 21.8 million views.

Authentic profiles sharing the conspiracy about State Farm’s “prior knowledge” of the wildfires reached 21.8 million potential views on X and Facebook. 

Viral Posts Fanning the Flames

Cyabra’s analysis also revealed that one of the key figures unintentionally amplifying misinformation about State Farm was actor James Woods (@RealJamesWoods). Woods shared his experience with State Farm’s policy cancellation. While his post on X expressed customer frustration and did not endorse the conspiracy, it played a significant role in spreading it. Garnering 2.4 million views, the post quickly became fertile ground for conspiracists to further push the “prior knowledge” narrative.

James Woods’ viral post criticizing State Farm and the conspiracy comments that were quick to follow. 

@unusual_whales, a widely followed finance account, posted a similar message that garnered 82,900 interactions and 13.3 million views. Once again, the comments section was overrun with fake news, misinformation, and conspiracy theories, as spreaders latched onto the post’s virality.

@unusual_whales’ post about State Farm’s policy cancellations.

State Farm’s Crisis Mis-management

As the crisis escalated, State Farm attempted to mitigate the fallout by having its agents post identical responses across their social media accounts, aiming to defuse tensions. However, Cyabra’s analysis revealed that this coordinated effort had little impact, gaining almost no exposure and only 52 overall engagements. The repetitive messaging came across as artificial, staged, and poorly executed. 

Cyabra’s analysis uncovered the identical response that was posted from different accounts of State Farm agents.

Early Warnings and Smarter Crisis Management

State Farm’s crisis was the result of several converging factors: the wildfires, last-minute policy cancellations, the company’s delayed and haphazard response, and a political climate in which fake news, misinformation, and disinformation – already on the rise in recent years – now spread more easily than ever.

However, this crisis also highlights how quickly false narratives can spiral online, inflicting serious damage on a brand’s reputation, and underscores the critical need to detect them early.

For corporations facing viral crises, it is essential to:

  • Proactively monitor discourse, sentiment, and narratives in real time to detect emerging threats before they escalate, and assess their impact.
  • Identify key amplifiers driving the conversation – whether customers, influencers, or coordinated fake profiles – and analyze their role in spreading harmful narratives.
  • Respond swiftly and strategically with messaging that is authentic, organic, adaptable, and personal.
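As a minimal illustration of the first point (real-time monitoring that flags emerging threats before they escalate), a simple volume-spike detector might look like the sketch below. The mention counts, window size, and threshold are all hypothetical; production monitoring is far more sophisticated:

```python
# Toy hourly mention counts for a brand; the values, window size, and
# spike threshold are invented for illustration.
mentions = [40, 52, 38, 45, 41, 390, 870]  # last two hours: a crisis forming

def detect_spike(series, window=5, factor=3.0):
    """Return indices where volume exceeds `factor` times the trailing average."""
    spikes = []
    for i in range(window, len(series)):
        baseline = sum(series[i - window:i]) / window
        if series[i] > factor * baseline:
            spikes.append(i)
    return spikes

print(detect_spike(mentions))  # -> [5, 6]
```

Catching the first flagged hour, rather than the second, is the difference between responding to a rising narrative and reacting to a full-blown crisis.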

It is also crucial to remember that crisis management and response are no longer just a matter of damage control: they require preparation, agility, and strategy. Only with robust monitoring and analysis tools can companies effectively mitigate crises, make informed decisions, and fully understand the scope of a rising issue or false narrative spreading across the digital landscape.

To learn more, contact Cyabra.

Download the full analysis by Cyabra

“Understanding What is Real and What is Fake is Critical”: Cyabra’s Impact at Golin
https://cyabra.com/blog/understanding-what-is-real-and-what-is-fake-is-critical-cyabras-impact-at-golin/
Tue, 11 Feb 2025

Jonny Bentwood, Global President of Data and Analytics at PR agency Golin, explains how Cyabra helps protect clients’ reputations.

In today’s digital landscape, the line between reality and fabrication has blurred significantly. This shift has tilted the advantage toward those spreading mis- and disinformation, making it harder for brands to protect their reputations. Jonny Bentwood, the Global President of Data and Analytics at international PR agency Golin, eloquently articulates the challenges brands face in combating harmful fake narratives, and explains how Cyabra enhances Golin’s capabilities to protect clients’ reputations. 

The Rising Threat of False Narratives

“The real problem we have nowadays is that it becomes so easy to create fake content,” Jonny points out. He continues to explain that this proliferation of false narratives can severely damage a brand’s reputation, often leading to crises that are entirely unfounded. With consumers as well as potential customers spending more time online than ever before, and receiving most of their information from unofficial channels, brands must navigate a complex environment flooded with mis- and disinformation.

“There have been so many situations where brands are caught up in a situation where misinformation is damaging their reputation – but it’s not really happening,” says Jonny, explaining how fake profiles can amplify crises, making them appear much worse than they are. For Jonny and Golin, the evident conclusion was that this reality necessitates robust solutions to combat these challenges.

Cyabra: A Game-Changer for Golin


Golin’s partnership with Cyabra has transformed the agency’s approach to data analytics and crisis management. Jonny states, “Now that we’ve got Cyabra as part of our data stack, we have unlocked opportunities that we weren’t able to do before.” Cyabra’s AI-powered platform acts as an early warning system, enabling Golin to stay ahead of potential threats before they escalate.

Jonny highlights one of Cyabra’s standout features: the ability to identify fake narratives and uncover the authors behind them, determine whether they are real or fake, and adjust response strategy accordingly. “Chopping the head off the snake right at the beginning because we know the author is the most crucial part of this,” Jonny says, and follows up with several examples of fake profiles spreading and amplifying disinformation during crises.  Referencing Cyabra’s authenticity detection and its ability to confidently identify real and fake profiles in online conversations, Jonny explains that this proactive approach allows Golin to mitigate the impact of misinformation effectively.

The Importance of Early Detection

Jonny stresses the significance of early detection in the fight against misinformation: “We’re never going to be able to stop fake content being out there; it’s whack-a-mole.” The focus, he explains, should be on recognizing when misinformation reaches a tipping point, at which point it can start spreading through legitimate channels. With Cyabra, Golin has gained not only the ability to detect false narratives, but also the confidence to address them decisively. 

“Being able to say ‘this is fake and this is real’ is not something that was in our capabilities before,” Jonny says. The partnership with Cyabra has provided Golin with a much-needed arsenal to combat the rising tide of disinformation effectively. As brands continue to grapple with the challenges posed by fake narratives, having a reliable ally like Cyabra is invaluable.

Authenticity & the Future of Brand Reputation

As Jonny Bentwood aptly summarizes, “If we’re going to protect the reputation of a brand, making sure that we understand what is fake and what is real is critical.” The integration of Cyabra into Golin’s operations is not just a technological upgrade; it represents a commitment to safeguarding brand integrity in an increasingly hostile digital environment. With Cyabra at their side, Golin is well-equipped to navigate these challenges and protect their clients’ reputations with confidence.

__________

Golin is a global PR agency with over 1700 employees across more than 50 offices worldwide. At the beginning of 2024, Golin partnered with Cyabra and has since regularly used Cyabra’s solutions to help protect Golin’s clients against online attacks and safeguard their reputation online. Follow Golin and Jonny Bentwood on LinkedIn. 

Sticking to Reality
https://quillette.com/2025/02/11/sticking-to-reality-misinformation/#new_tab
Tue, 11 Feb 2025

Cyabra Insights protects against AI-driven digital disinformation
https://www.helpnetsecurity.com/2025/02/06/cyabra-insights/#new_tab
Sun, 09 Feb 2025

DeepSeek Hype Fueled by Fake Profiles
https://cyabra.com/blog/deepseek-hype-fueled-by-fake-profiles/
Thu, 06 Feb 2025

Cyabra’s latest investigation revealed that much of the hype around DeepSeek isn’t organic: it’s promoted by fake profiles.

DeepSeek, a new AI developed by a Chinese startup, topped app download charts, triggered a trillion-dollar market loss in the US, and has been a source of inescapable online hype that seems to grow bigger and bigger with time. 

However, Cyabra’s latest research reveals that much of this excitement isn’t organic. In fact, it’s part of a coordinated campaign powered by fake profiles. Furthermore, those coordinated fake profiles exhibit behavior that is usually attributed to Chinese bot networks. 

Coordinated disinformation campaigns led by foreign state actors have multiplied in recent years. With the rise of AI tools, they became part of an evolving playbook used to influence public trust, markets, and even global policymaking.

Here’s the full story: 

The Tactics of DeepSeek’s Digital Cheerleaders

With its rising influence, particularly on the US stock market, DeepSeek became the subject of significant online engagement the moment it was launched. 

Between January 21 and February 4, Cyabra conducted a large-scale analysis of 41,864 profiles discussing DeepSeek-related content across major social platforms. Of these, 3,388 profiles (roughly 8%) were identified as fake. Most were active on X, where fake profiles accounted for 15% of engagement – double the typical rate on social media. 

The inauthentic accounts promoting DeepSeek were not operating independently: they formed a coordinated network working in sync, actively pushing positive narratives to amplify DeepSeek’s hype and create the illusion of widespread excitement and adoption. Through thousands of posts, comments, and shares, those fake profiles had a massive impact on social discourse. At the peak of activity, on February 3, fake profiles generated 2,158 posts in a single day.

The fake profiles employed two primary tactics: 

1. Amplifying each other by interacting within the network to create the appearance of broad, positive engagement. 

2. Integrating into authentic conversations, interacting with genuine users who were unaware they were engaging with bots.

In the picture: the two methods employed by fake profiles: interacting with other fake profiles vs. integrating into authentic conversations. 
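The two tactics can be told apart in an interaction graph: once profiles have been labeled fake by an upstream authenticity classifier, fake-to-fake edges indicate mutual amplification, while fake-to-authentic edges indicate infiltration. A toy sketch (all profile names and interactions below are hypothetical, not Cyabra's method):

```python
# Toy interaction log: (source, target) pairs, plus a hypothetical set of
# profiles already labeled fake by an upstream authenticity classifier.
fake = {"bot_a", "bot_b", "bot_c"}
interactions = [
    ("bot_a", "bot_b"), ("bot_b", "bot_a"),              # tactic 1: mutual amplification
    ("bot_c", "real_user_1"), ("bot_c", "real_user_2"),  # tactic 2: infiltration
]

def tactic_split(interactions, fake):
    """Count fake-to-fake vs. fake-to-authentic interactions."""
    amplification = sum(1 for s, t in interactions if s in fake and t in fake)
    infiltration = sum(1 for s, t in interactions if s in fake and t not in fake)
    return amplification, infiltration

print(tactic_split(interactions, fake))  # -> (2, 2)
```

A high fake-to-authentic ratio is what makes infiltration campaigns dangerous: genuine users end up replying to, and amplifying, accounts they have no reason to doubt.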

Another tactic fake profiles employed to maximize exposure was engaging with high-visibility posts from authentic profiles. For example, one widely viewed post by @FanTV_official, which amassed over 480,000 views, was flooded with coordinated DeepSeek promotions. By inserting comments into already-popular discussions, the fake profiles increased credibility and ensured their content reached a broader audience. This tactic – piggybacking on trending posts to amplify fake engagement – has become an emerging strategy in online influence campaigns.

The coordinated profiles primarily posted in English and claimed to be based in the US and Europe. However, their synchronized activity suggested they originated from a single source. Cyabra also detected frequent mentions of China as the origin of DeepSeek, seemingly intended to attribute credit and foster positive sentiment towards China itself. 

As the artificial positive sentiment grew, various networks of fake profiles began exploiting the #DeepSeek hashtag for their own purposes. One network used the hype to promote scams, encouraging users to purchase tokens, while another leveraged the buzz to promote PublicAI, a competitor to DeepSeek, by citing a recent security breach on the platform.

In the picture: second and third campaigns of fake profiles that used the #DeepSeek hype to push competitors and promote scams. 

The Anatomy of a Fake Profile

Fake profiles in the discourse exhibited clear telltale signs of a coordinated bot network:

  • Avatar recycling: Multiple fake profiles used the same profile pictures, often generic images of Chinese women.
  • Recent creation dates: 44.7% of these fake accounts were created in 2024, aligning with DeepSeek’s rise.
  • Synchronized posting: Fake accounts posted simultaneously to maximize visibility.
  • Identical content: Many accounts copy-pasted identical praise-filled comments.

These characteristics are consistent with the typical behavior of Chinese bot networks. By acting as a coordinated front, these accounts created an illusion of authenticity and virality, while much of the enthusiasm was, in reality, artificially engineered.

In the picture: Fake profiles in the DeepSeek discourse. Notice the identical posts (top right).

Artificial Hype, Real Risks

DeepSeek isn’t just the story of a new and exciting AI model. It’s a story of influence: of shaping public perception through a meticulously designed, premeditated influence operation. Malicious actors are exploiting real-world events, preying on heightened emotions, and weaponizing social media to fuel anger, instill fear, and deepen societal divisions. The same disinformation tactics once used to sway elections and incite protests are now being deployed to shape the AI arms race.

In this case, the ability to distinguish organic enthusiasm from manufactured hype is more critical than ever – but it’s only half the story. As the influence of fake profiles and disinformation tactics employed by state actors continues to grow – becoming not only more common but also harder to detect – the need for tools to combat these coordinated campaigns is more urgent than ever. Identifying the fake actors behind these campaigns, analyzing their behavior, and detecting the fake content they spread has become crucial to protect public discourse, trust, and perception. 

To learn more about comprehensive solutions to detect and combat coordinated fake campaigns, contact Cyabra.

DeepSeek’s Breakthrough Sparks National Pride in China
https://www.wsj.com/world/china/deepseeks-breakthrough-sparks-national-pride-in-china-4479804b?st=yj1jxx&reflink=desktopwebshare_permalink#new_tab
Sun, 02 Feb 2025

Cyabra Introduces “Insights”: Turning Complex Data Into AI-Driven Actionable Insights
https://cyabra.com/blog/cyabra-introduces-insights-turning-complex-data-into-ai-driven-actionable-insights/
Sat, 01 Feb 2025

Cyabra’s Insights empowers brands and government organizations to detect and understand online threats in real-time.

Cyabra’s Insights empowers brands and government organizations to detect and understand online threats in real-time, delivering actionable insights that once required the expertise of an entire team of analysts. Here’s how Cyabra makes it happen:

Too Much Information? 

Today, nearly every brand you know dedicates time, resources, and specialized roles to monitoring, analyzing, and understanding social media. Marketing teams, brand managers and strategists, crisis management experts, PR agencies, market researchers, customer insights managers, and growth officers – all these professionals rely on social media data and analysis on a weekly, daily, or even hourly basis.

Online attacks against brands have become increasingly frequent in recent years, causing massive financial and reputational damage. In response, monitoring and analysis tools have evolved, now offering an abundance of data: from sentiment to the age and location of those involved, from uncovering dominant narratives to identifying fake profiles spreading disinformation and manipulating social discourse.

While analysts and data scientists thrive on this precise, detailed, real-time information, the sheer volume of data can be overwhelming for most of us. Decoding complex data has become a time-consuming part of our daily work life.

This is where Cyabra’s Insights steps in.

Cyabra’s “Insights” in action: detecting fake profiles manipulating the conversation

Navigating the Online Data Maze

Insights takes the overwhelming amount of data gathered by Cyabra’s AI, which continuously monitors and analyzes online conversations and news sites, and breaks it down into easy-to-understand answers and visuals.

With Insights, brands can uncover accessible, actionable results, understand key takeaways, and most importantly, spend less time on analysis and research, freeing up time to use the uncovered data more effectively.

Insights’ Essential Features include:

  1. Clear, Actionable Visuals: Insights reveals patterns, trends and key metrics, including sentiment, engagement, communities, influencers, geographic and demographic data, hashtags, and peak activity – all while sifting the real from the fake, providing a clear view of the authenticity of conversations.
  2. User-Friendly Q&A Format: Insights supplies answers to critical questions in seconds – sometimes even to questions you didn’t know you needed answered! Insights enables Cyabra’s clients and partners to make informed, confident decisions, eliminating guesswork and allowing them to focus on the bottom line.
  3. Automated Disinformation Detection: Insights instantly identifies bots, fake profiles, deepfakes, manipulated GenAI content, toxic narratives, rising crises, harmful trends, and any other threats to brand reputation. 
Cyabra’s “Insights” in action: detecting the most viral narrative 

The Bottom Line in One Short Line 

Insights’ intuitive visuals and automated Q&As are designed around the most common queries and needs of Cyabra’s diverse clients across both private and public sectors. Insights helps brands and governments instantly uncover harmful narratives, detect fake accounts, and analyze how false content spreads – saving time and resources and supporting swift responses during critical moments, all without requiring technical expertise.

As we head into 2025, following the largest election year and a record year for disinformation, Cyabra is launching Insights at a pivotal moment. False narratives, fake accounts, and AI-generated content are spreading faster than ever, costing businesses and governments billions annually while eroding public trust and reputations. False news stories are 70% more likely to be shared than true ones, and experts predict that in the coming year, disinformation will become the top challenge for public and private sectors worldwide. With disinformation spiking during high-stakes events like elections, the need for rapid data analysis and response tools like Insights has never been greater.

“Clients often ask, ‘What’s next?’ when confronting disinformation,” said Yossef Daar, CPO of Cyabra. “Insights takes the guesswork out of the analysis, giving users a straightforward, visual way to see where false narratives are spreading, who’s behind them, and what’s driving engagement. This enables them to respond to digital threats faster and more effectively.”

Cyabra’s “Insights” in action: detecting the most viral narrative 

“Every second matters when countering disinformation,” said Dan Brahmy, CEO of Cyabra. “Insights turns vast amounts of data into clear, actionable knowledge, empowering our clients to uncover the real story behind the data and respond before the damage is done. It’s like having an expert analyst at your fingertips.”

During beta testing, Insights enabled:

  • A Fortune 500 company to neutralize reputational damage in minutes after detecting a disinformation spike about its CEO.
  • A government agency to uncover and disrupt hashtags fueling disinformation campaigns, enabling quicker interventions.

Insights is now available on Cyabra’s platform. To learn more about Insights and to see it in action, contact Cyabra.

Misinformation and Disinformation and their impact on the future of PR and Communication
https://amecorg.com/2025/01/misinformation-and-disinformation-and-their-impact-on-the-future-of-pr-and-communication-your-questions-answered/#new_tab
Mon, 27 Jan 2025
