Fake Profiles Fueled the Jaguar Backlash

Luxury car brand Jaguar faced a tidal wave of criticism online following a new advertising campaign. The protest appeared to stem from authentic, organic dissatisfaction, with critics condemning the company for “promoting woke aesthetic over luxury and performance.” However, while analyzing the online backlash around Jaguar, Cyabra uncovered an orchestrated fake campaign designed to tarnish the brand’s image and reputation. 

The magnitude of the attack against Jaguar is a stark example of how coordinated disinformation can ignite a firestorm, weaponizing social media platforms and inflicting significant reputational damage on major brands.

When an Online Crisis Shifts Gears 

The backlash against Jaguar started on November 19, when the brand released its new campaign, “Copy Nothing.” As the hashtags #BoycottJaguar and #GoWokeGoBroke began trending, accompanied by the derogatory #Faguar, fake profiles infiltrated the conversation and amplified the negative sentiment, creating the illusion of widespread discontent.

Cyabra monitored the massive rise in negative sentiment against Jaguar, which peaked on November 21 at 80% of the conversation (a 5:1 ratio of negative to neutral/positive posts). 

Negative sentiment against Jaguar started rising on November 19 and peaked on the 21st
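For readers who want to see how such figures break down, here is a minimal Python sketch of the underlying arithmetic: it tallies labeled posts per day and derives each day’s negative share and negative-to-other ratio, then reports the peak day. The counts and sentiment labels are hypothetical stand-ins for illustration only, not Cyabra’s data or classification pipeline.

```python
from collections import Counter
from datetime import date

# Hypothetical labeled posts: (day, sentiment label). In practice the labels
# would come from a sentiment classifier; these counts are illustrative only.
posts = (
    [(date(2024, 11, 19), "negative")] * 300 + [(date(2024, 11, 19), "other")] * 200
    + [(date(2024, 11, 20), "negative")] * 900 + [(date(2024, 11, 20), "other")] * 350
    + [(date(2024, 11, 21), "negative")] * 1600 + [(date(2024, 11, 21), "other")] * 400
)

# Tally negative vs. non-negative posts per day.
by_day = {}
for day, label in posts:
    by_day.setdefault(day, Counter())[label] += 1

# Negative share and negative-to-other ratio per day.
for day in sorted(by_day):
    counts = by_day[day]
    total = counts["negative"] + counts["other"]
    share = counts["negative"] / total
    ratio = counts["negative"] / counts["other"]
    print(f"{day}: {share:.0%} negative ({ratio:.1f}:1 negative-to-other)")

# Day with the highest negative share of the conversation.
peak_day = max(by_day, key=lambda d: by_day[d]["negative"] / sum(by_day[d].values()))
print("Peak negative-sentiment day:", peak_day)
```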

Cyabra’s analysis revealed that 18% of the accounts using #BoycottJaguar and 20% of those behind the #Faguar hashtag were fake, part of a coordinated campaign that systematically weaponized hashtags like #GoWokeGoBroke to amplify outrage and escalate crises for targeted companies. 

Even more striking, one of the predominant bot networks involved in the Jaguar backlash was not new to spreading disinformation: Cyabra identified the same network in a recent disinformation campaign surrounding President-elect Trump during the presidential race. Election bots are often repurposed for the next political influence operation, but the fact that they were so casually and easily harnessed to attack Jaguar shows how readily brands can now become victims of disinformation and fake profiles. 

Fake profiles using #BoycottJaguar to attack the brand

The fake profiles attacking Jaguar did more than exploit and amplify trending negative hashtags to manipulate the conversation: they also leveraged the extensive negative media coverage to further fuel the backlash. One example of this tactic was an article in The Daily Wire criticizing Jaguar, titled “Like Watching a Car Crash: Jaguar’s Disastrous New Ad.” The article became a central element of the coordinated fake campaign against Jaguar on Facebook: of the hundreds of shares and reposts it gained, 52% were made by fake profiles, giving it another push just as it was fading and causing it to resurface and regain the interest of authentic profiles. The article, one of 3,788 that discussed Jaguar negatively, gained a total of 11,400 interactions. 

Fake profiles amplified the Daily Wire article, causing the trend to expand and last longer.
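As a rough illustration of how this kind of inauthentic amplification can be surfaced, the sketch below tallies hypothetical repost records for a single article, computes the daily share of reposts made by fake profiles, and flags days where renewed growth is driven mostly by fake accounts. The records and the is_fake flag are assumptions for illustration; they do not represent Cyabra’s data or detection methods.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import date

# Hypothetical repost records for one article. The is_fake flag is assumed to
# come from an upstream fake-profile classifier; the counts are illustrative.
@dataclass
class Share:
    day: date
    is_fake: bool

shares = (
    [Share(date(2024, 11, 22), False)] * 120 + [Share(date(2024, 11, 22), True)] * 30
    + [Share(date(2024, 11, 23), False)] * 60 + [Share(date(2024, 11, 23), True)] * 40
    + [Share(date(2024, 11, 24), False)] * 90 + [Share(date(2024, 11, 24), True)] * 110
)

# Per-day totals and number of reposts made by fake profiles.
per_day = defaultdict(lambda: {"total": 0, "fake": 0})
for s in shares:
    per_day[s.day]["total"] += 1
    per_day[s.day]["fake"] += s.is_fake

days = sorted(per_day)
for prev, curr in zip(days, days[1:]):
    before, now = per_day[prev], per_day[curr]
    fake_share = now["fake"] / now["total"]
    # Flag days where repost volume grows versus the previous day and fake
    # profiles account for most of the new activity.
    if now["total"] > before["total"] and fake_share > 0.5:
        print(f"{curr}: possible inauthentic resurgence "
              f"({fake_share:.0%} of {now['total']} reposts by fake profiles)")
```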

Jaguar’s automatic, generic responses added fuel to the fire, both by amplifying the negative comments and by failing to address the dissatisfaction. One reply to Jaguar’s post, from the X account @CanuckCrusaderX, gained 3.4 million views and played a significant role in promoting the calls for a boycott. Fake profiles also took part in amplifying this viral post. 

@CanuckCrusaderX’s call for a boycott going viral

Can Brands Exit the Fast Lane to Disinformation?

The Jaguar case study illustrates a harsh truth:

  • Fake profiles are potent tools for shaping narratives and influencing public perception.
  • Disinformation spreads rapidly, often outpacing a brand’s ability to respond effectively.
  • Reputational damage can occur in hours, with long-lasting consequences for brand value and trust.

Tackling online backlash has always been a challenge for brands. The shifting political climate and the rise of online criticism, combined with the fear of being “canceled,” have led many brands to exercise extra caution in their marketing strategies and steer away from political topics. 

However, when fake profiles are involved, there is no reliable way to stay safe from online backlash. Fake profiles can latch onto any hashtag, any false narrative, any slightly trending issue – and transform it into a major reputational and financial crisis in the blink of an eye.

In this new playing field for bad actors, classic crisis management methods have become irrelevant. Brands must adopt proactive measures to safeguard their reputation and continuously monitor for online attacks and disinformation campaigns. This approach is most effective with AI-powered disinformation detection tools, which not only detect toxic narratives and online attacks but, more importantly, analyze the forces behind them, identify the fake profiles involved, and measure their influence on public discourse. 
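To make the idea of continuous monitoring concrete, here is a minimal Python sketch of one such check: pull recent posts for a monitored hashtag, estimate what share comes from likely-fake profiles, and raise an alert when that share crosses a threshold. The fetch_recent_posts helper, the looks_fake heuristic, and the 15% threshold are all hypothetical placeholders, not Cyabra’s tools, any real API, or a recommended configuration.

```python
import random

# Hypothetical stand-ins: in a real deployment these would wrap a social
# listening API and a trained fake-profile classifier. The versions below
# generate toy data so the sketch runs end to end.
def fetch_recent_posts(hashtag: str, n: int = 200) -> list[dict]:
    """Return toy post records; roughly 20% mimic throwaway accounts."""
    rng = random.Random(hashtag)
    posts = []
    for _ in range(n):
        throwaway = rng.random() < 0.2
        posts.append({
            "hashtag": hashtag,
            "author": {
                "account_age_days": rng.randint(1, 20) if throwaway else rng.randint(200, 3000),
                "followers": rng.randint(0, 5) if throwaway else rng.randint(50, 5000),
            },
        })
    return posts

def looks_fake(author: dict) -> bool:
    """Toy heuristic: very new accounts with almost no followers."""
    return author["account_age_days"] < 30 and author["followers"] < 10

FAKE_SHARE_ALERT = 0.15  # alert when >15% of recent posts come from likely-fake profiles

def check_hashtag(hashtag: str) -> None:
    posts = fetch_recent_posts(hashtag)
    fake = sum(looks_fake(p["author"]) for p in posts)
    share = fake / len(posts)
    status = "ALERT" if share >= FAKE_SHARE_ALERT else "ok"
    print(f"{status} {hashtag}: {share:.0%} of {len(posts)} recent posts look fake")

for tag in ["#BoycottJaguar", "#GoWokeGoBroke"]:
    check_hashtag(tag)
```

In practice, the fake-profile signal would come from a dedicated classifier rather than a two-feature heuristic, and alerts would feed a crisis-response workflow rather than a print statement; the sketch only shows the shape of a recurring monitoring check.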

Contact Cyabra to learn how to better protect your brand against online manipulation and prevent reputational and financial damage. 
