DeepSeek, a new AI developed by a Chinese startup, topped app download charts, triggered a trillion-dollar market loss in the US, and has been a source of inescapable online hype that only keeps growing.
However, Cyabra’s latest research reveals that much of this excitement isn’t organic. In fact, it’s part of a coordinated campaign powered by fake profiles. Furthermore, those coordinated fake profiles exhibit behavior that is usually attributed to Chinese bot networks.
Coordinated disinformation campaigns led by foreign state actors have multiplied in recent years. With the rise of AI tools, they have become part of an evolving playbook used to influence public trust, markets, and even global policymaking.
Here’s the full story:
TL;DR?
- A coordinated network of 3,388 fake profiles strategically promoted DeepSeek, employing tactics that suggest potential involvement by the Chinese government.
- 15% of the profiles discussing DeepSeek on X were fake – double the usual rate of fake accounts on social media – and they generated 2,158 posts and comments.
- 44.7% of those fake profiles were created in 2024, aligning with the timing of DeepSeek’s launch.
The Tactics of DeepSeek’s Digital Cheerleaders
With its rising influence, particularly on the US stock market, DeepSeek became the subject of significant online engagement the moment it was launched.
Between January 21 and February 4, Cyabra conducted a large-scale analysis of 41,864 profiles discussing DeepSeek-related content across major social platforms. 3,388 profiles were identified as fake. Most of them were active on X, where fake profiles accounted for 15% of engagement – double the typical rate on social media.
The inauthentic accounts promoting DeepSeek were not operating independently: they formed a coordinated network working in sync, actively pushing positive narratives to amplify DeepSeek’s hype and create the illusion of widespread excitement and adoption. Through thousands of posts, comments, and shares, those fake profiles had a massive impact on social discourse. On February 3, the day of peak activity, fake profiles generated 2,158 posts.
The fake profiles employed two primary tactics:
1. Amplifying each other by interacting within the network to create the appearance of broad, positive engagement.
2. Integrating into authentic conversations, interacting with genuine users who were unaware they were engaging with bots.

In the picture: The two methods employed by fake profiles: interacting with other fake profiles vs. integrating into authentic conversations.
Another tactic fake profiles employed to maximize exposure was engaging with high-visibility posts from authentic profiles. For example, one widely viewed post by @FanTV_official, which amassed over 480,000 views, was flooded with coordinated DeepSeek promotions. By inserting comments into already-popular discussions, the fake profiles increased credibility and ensured their content reached a broader audience. This tactic – piggybacking on trending posts to amplify fake engagement – has become an emerging strategy in online influence campaigns.
The coordinated profiles primarily posted in English and claimed to be based in the US and Europe. However, their synchronized activity suggested they originated from a single source. Cyabra also detected frequent mentions of China as the origin of DeepSeek, seemingly intended to attribute credit and foster positive sentiment towards China itself.
As the artificial positive sentiment grew, various networks of fake profiles began exploiting the #DeepSeek hashtag for their own purposes. One network used the hype to promote scams, encouraging users to purchase tokens, while another leveraged the buzz to promote PublicAI, a competitor to DeepSeek, by citing a recent security breach on the platform.
In the picture: Second and third campaigns of fake profiles used the #DeepSeek hype to push competitors and promote scams.
The Anatomy of a Fake Profile
Fake profiles in the discourse exhibited clear telltale signs of a coordinated bot network:
- Avatar recycling: Multiple fake profiles used the same profile pictures, often generic images of Chinese women.
- Recent creation dates: 44.7% of these fake accounts were created in 2024, aligning with DeepSeek’s rise.
- Synchronized posting: Fake accounts posted simultaneously to maximize visibility.
- Identical content: Many accounts copy-pasted identical praise-filled comments.
These characteristics are consistent with the typical behavior of Chinese bot networks. By acting as a coordinated front, these accounts created an illusion of authenticity and virality, while much of the enthusiasm was, in reality, artificially engineered.
In the picture: Fake profiles in the DeepSeek discourse. Notice the identical posts (top right).
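To make these signals more concrete, here is a minimal sketch of how two of them – identical copy-pasted content and synchronized posting – could be flagged in a collection of posts. This is an illustrative example only, not Cyabra’s actual methodology; the field names (author, text, time) and thresholds are hypothetical.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical post records; the schema and values are illustrative, not real data.
posts = [
    {"author": "user_a", "text": "DeepSeek is amazing, best AI ever!", "time": "2025-02-03T10:00:05"},
    {"author": "user_b", "text": "DeepSeek is amazing, best AI ever!", "time": "2025-02-03T10:00:07"},
    {"author": "user_c", "text": "Just tried DeepSeek, impressed.",    "time": "2025-02-03T14:30:00"},
]

def flag_identical_content(posts, min_accounts=2):
    """Group posts by normalized text and flag wording shared by multiple accounts."""
    by_text = defaultdict(set)
    for p in posts:
        by_text[p["text"].strip().lower()].add(p["author"])
    return {text: authors for text, authors in by_text.items() if len(authors) >= min_accounts}

def flag_synchronized_posting(posts, window_seconds=60, min_accounts=2):
    """Flag bursts where several distinct accounts post within a short time window."""
    times = sorted((datetime.fromisoformat(p["time"]), p["author"]) for p in posts)
    bursts = []
    for t, _ in times:
        cluster = {a for t2, a in times if 0 <= (t2 - t).total_seconds() <= window_seconds}
        if len(cluster) >= min_accounts:
            bursts.append((t.isoformat(), sorted(cluster)))
    return bursts

print(flag_identical_content(posts))       # identical praise posted by user_a and user_b
print(flag_synchronized_posting(posts))    # the two posts land within seconds of each other
```

In practice, detection systems combine many such weak signals (avatar reuse, account age, posting cadence, content similarity) rather than relying on any single one.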
Artificial Hype, Real Risks
DeepSeek isn’t just the story of a new and exciting AI model. It’s a story of influence: of shaping public perception through a meticulously designed, premeditated influence operation. Malicious actors are exploiting real-world events, preying on heightened emotions, and weaponizing social media to fuel anger, instill fear, and deepen societal divisions. The same disinformation tactics once used to sway elections and incite protests are now being deployed to shape the AI arms race.
In this case, the ability to distinguish organic enthusiasm from manufactured hype is more critical than ever – but it’s only half the story. As the fake profiles and disinformation tactics employed by state actors become not only more common but also harder to detect, the need for tools to combat these coordinated campaigns grows more urgent. Identifying the fake actors behind these campaigns, analyzing their behavior, and detecting the fake content they spread has become crucial to protecting public discourse, trust, and perception.
To learn more about comprehensive solutions to detect and combat coordinated fake campaigns, contact Cyabra.