#BoycottSephora: A Case Study

How a Brand’s Image can be Destroyed in Seconds

On January 29, makeup influencer Amanda Ensing shared a sponsored video featuring Sephora products on Instagram. What transpired next was a classic example of a brand crisis created within seconds. Within minutes of her post, social media users surfaced previous posts in which Ensing promoted the Capitol Hill riots. The hashtag #boycottsephora began to trend, with calls for Sephora to denounce Ensing. When Sephora did so, the hashtag flipped: now it criticized Sephora as hostile to conservatives, free speech, Christians, and Latinos, to name just a few.

What happened here is what we call a snowball effect: a single post "snowballs," spreading a message and creating outsized influence and impact.

Here’s what Cyabra uncovered:

Seconds after Ensing’s promotional post went live, followers discovered a previous tweet of hers in support of the Capitol riots.

Immediately, calls to boycott Sephora trended with the hashtag #boycottsephora, demanding that the makeup company sever ties with Ensing.

Soon after, Sephora announced it was severing ties with Ensing and would no longer work with her.

However, the damage was already done, and more was to follow. With Sephora’s decision to cut ties with Ensing, her supporters took over the #BoycottSephora hashtag, completely reversing the narrative: this time the company was shamed for being anti-conservative.

According to Cyabra’s analysis, the words most associated with the hashtag #boycottsephora were MAGA, Christians and Conservatives.

After analyzing influential Instagram personas participating in the conversation, Cyabra identified Emily Sarmo, a fashion blogger with a 17.2K following. Her post criticized Sephora for suppressing free speech and received nearly 500 likes. The Instagram page “Latinos With Trump” was an influential voice as well, calling on followers to boycott Sephora for severing ties with Ensing.

Adding fuel to the fire, real and fake profiles were created just for this hot topic. An Instagram page titled “F*** Sephora” appeared, claiming that the company discriminates against Christians and conservatives.

 


Not only were there calls to boycott Sephora, but both sides of the political aisle called on users to switch to Ulta, one of Sephora’s major competitors.

As demonstrated, with one video posted on Instagram, Sephora lost consumers across the political spectrum. There was backlash from followers on both the right and the left, and even apolitical posts criticized the company for taking sides and limiting people’s right to express their views. As we can see, posts like these snowball at an ever-increasing pace in today’s media networks, irreversibly damaging brand names.

Vaccine Disinformation: Part Two

Download the report here

In ongoing efforts to protect the public from COVID-19 disinformation, Cyabra continuously analyzes the thousands of conversations taking place across social media regarding COVID-19 vaccines. Following the first report of our vaccine disinformation series, part two continues to highlight disinformation regarding vaccine side effects, as well as campaigns undermining the effectiveness of specific vaccines.

Findings

The segmentation of the 390,000 profiles that Cyabra scanned for this report is presented below. Profiles that used negative sentiment text are labeled “Bad Actors” by Cyabra. Facebook is the social media platform with the highest percentage of fake profiles that referenced the COVID-19 vaccine; it also has the highest percentage of profiles that used negative sentiment language in referencing the vaccine. However, a greater number of coordinated online campaigns could be found on Twitter than on Facebook.

 

Cyabra analyzed the content of both real and fake profiles and detected several tweets discussing the same topic. The tweets refer to a case in Boston where a doctor who took Moderna’s vaccine had an allergic reaction. Cyabra sampled 1,976 profiles that participated in this discourse, 7% of which were fake. Below are examples of retweets by the fake profiles Cyabra scanned.

Cyabra also analyzes the connections between real and fake profiles and divides them into communities. Cyabra’s “community” function highlights profiles that are highly connected in various ways. Cyabra analyzed the behavior of the scanned profiles and divided them into communities containing profiles with similar behavior. The division is based on many signals, such as the number of friends and followers, the profile creation date, and the absence of profile pictures.
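As a toy illustration of this kind of behavior-based grouping (the field names, bucket size, and data below are invented for the example; Cyabra's actual platform is proprietary), profiles can be reduced to a coarse behavioral signature built from signals like those listed above, with profiles sharing a signature forming one candidate community:

```python
from collections import defaultdict

def behavior_key(profile, follower_bucket=100):
    """Reduce a profile to a coarse behavioral signature.

    Profiles sharing a signature (similar follower count, same
    creation month, same presence/absence of a profile picture)
    fall into the same candidate community.
    """
    return (
        profile["followers"] // follower_bucket,  # follower-count bucket
        profile["created"][:7],                   # creation year-month
        profile["has_picture"],                   # profile picture present?
    )

def group_into_communities(profiles):
    communities = defaultdict(list)
    for p in profiles:
        communities[behavior_key(p)].append(p["name"])
    return dict(communities)

profiles = [
    {"name": "a", "followers": 12,  "created": "2020-11-03", "has_picture": False},
    {"name": "b", "followers": 40,  "created": "2020-11-21", "has_picture": False},
    {"name": "c", "followers": 950, "created": "2014-02-10", "has_picture": True},
]

communities = group_into_communities(profiles)
# "a" and "b" share a signature: few followers, created 2020-11, no picture
```

A real system would use richer features and proper clustering, but the principle is the same: accounts created in the same batch with the same gaps tend to end up in the same group.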

Cyabra found one community of fake and real profiles with similar behavior tweeting positively about AstraZeneca’s vaccine. The community contains 165 profiles, of which 94% (155 profiles) are fake. The images below show this community and a few examples of the positive content it spread about AstraZeneca’s vaccine.

Below is another example of a Cyabra community; this one is based on followers. The profiles inside this community follow each other to a high degree. Often, a community of profiles following each other indicates they are spreading the same message. Most profiles in this community expressed criticism of the Russian vaccine Sputnik. Cyabra analyzed the connections, behavior, and text of these profiles and discovered many Bad Actors (profiles that used negative sentiment language). The Twitter community contains 647 profiles spreading negative content about the Sputnik vaccine, claiming it is dangerous. Below is an image of the community and examples of content shared by its profiles.

Conclusion

Cyabra continues to monitor the online discourse surrounding the COVID-19 vaccine on several social media networks with its proprietary AI-powered platform. This report is part of an ongoing effort to map potentially harmful social media orchestrations, and discusses initial results found by Cyabra on Facebook, Twitter, and VK. Cyabra identified over a hundred fake profiles contributing to an online discussion about a Boston doctor who experienced a severe reaction to Moderna’s vaccine. Additionally, Cyabra found several interactive online communities using fake profiles to spread their messages. One community of fake profiles spread claims favorable to AstraZeneca and its COVID vaccine; another discussed the possible dangers of the Russian vaccine Sputnik. Cyabra will continue to monitor the response to the vaccine rollout on social media and to pinpoint and report on inauthentic behavior online.

 

Vaccine Disinformation: What Does It Really Mean?


As the world begins to deploy COVID-19 vaccinations, the news is flooded with articles about the threat of vaccine disinformation. These articles claim that fake news and disinformation surrounding the vaccines are rampant and are influencing the public’s view of them. What remains unclear, however, is where this disinformation comes from, how it spreads, and what the motivations behind it are.

We’ll tackle each of these questions step by step.

Social Media as a news source

Let’s first talk about the medium where these conversations take place. As we know, social media has significantly reshaped the media landscape. The days of checking newspapers, local media outlets, or even a simple Google search are nearly obsolete for much of the public. Today, social media platforms such as Facebook, Twitter and Instagram have taken over as news sources. This means that anyone on these platforms is now a source of news.

Social Media Users

Now, we can talk about who these sources are and their influence and reach.

First, there are the people you know personally: your friends, family members and acquaintances. They may share your outlook; you likely have much in common. Oftentimes we trust these sources, because we know where they come from and what they represent.

Public figures are another category. These can be celebrities, politicians, business leaders or influencers. Such figures have the power to influence the decisions we make each day, from our political views to the shampoo we use.

We also have the social media accounts of media outlets, which share their articles. While reliable media outlets are active on these platforms, unreliable sources can become just as active and influential.

Lastly, we have fake profiles. These include fake users, avatars, bots, sockpuppets and actual real people who are paid to promote specific agendas. These profiles are used to spread and amplify messages or information.

While fake profiles are the seemingly more obvious source of false information, the other categories can often be equally, if not more, dangerous. Public figures, influencers, friends and acquaintances are real people who pass along any and all information, real or fake.

See where the problem begins?

Now that we understand the different sources of information, we need to understand what these sources are capable of.

Fake news, Disinformation and Misinformation

To further understand the problem, we need to define what fake news, disinformation and misinformation (a less common term) really mean. While all three fall under the category of false information, each phrase is distinct, with important differences that are often missed.

Misinformation is false information that is spread deliberately or accidentally, regardless of the motivations behind it.

Disinformation is false information that is deliberately biased, fabricated or misleading. This can be manipulated narratives, facts or propaganda.

Lastly, fake news is false information purposefully created in a format that mimics mainstream media in order to spread misinformation, conspiracy theories or hoaxes. Sometimes these messages are not outright fabrications, but rather sensational, emotionally charged or misleading content.

Each of these forms of false information poses a threat. While some false information on social media can be spotted (that’s a conversation for another blog!), the more “trusted” a source is, the higher the risk.

Types of Vaccine Disinformation

Now that we understand both the different categories of social media as well as the different types of false information, we can return to our original topic- vaccine disinformation.

As we know, one of the most globally discussed topics is the pandemic, which has created an overabundance of information. This means that parties interested in the topic (and it’s safe to say that this is a large percentage of the population) are susceptible to consuming false information.

As the pandemic conversation has turned to vaccines, we are witnessing an increase in the spread of false information, by malicious social media users and well-meaning ones alike.

Here are some examples of the reasoning behind social media users spreading COVID-19 vaccine misinformation:

  1. Exploitation: People want to make money. By hooking readers with information about the vaccine, they can advertise their own agendas in connection to vaccines. For instance, Cyabra identified a campaign of fake profiles on Facebook that exploited the online interest in COVID-19 to advertise Bitcoin.
  2. Answers: When times are hard, people want answers. They look for comfort in reasons and explanations for what is happening. This is a breeding ground for conspiracy theories, and has led many to believe that the virus is a way for world leaders to control their citizens.
  3. Community: People like to feel that they belong. They seek connections, and by attaching themselves to an idea or opinion and engaging in social media conversations, they become part of a community. Many social media users will seek out these communities to feel a sense of purpose.

COVID-19 has brought out the vulnerability in all of us. Unfortunately, this leads to an environment where false information spreads, and spreads fast. And social media can be a hotbed for this.

The Dangers

Now that we understand what vaccine disinformation really means, it’s important to highlight its dangers. Put simply, mistrust and confusion cause vaccine hesitancy, endangering efforts to end a pandemic that has already killed 1.85 million people.

The infodemic is our latest public health threat, and we need to stop it.

Cyabra is actively monitoring the social media conversations surrounding COVID-19. Our latest report on vaccine disinformation details our findings.

Vaccine Disinformation: Part One

Download the full report now

With COVID-19 vaccines currently being developed and deployed worldwide, the public now faces its next public health challenge: disinformation.

In an effort to shed light on some of the ongoing fake campaigns circulating on social media, Cyabra used its AI-based solution to scan these conversations. Focusing on Facebook and Twitter, Cyabra analyzed the online behavior, connections, and messaging of 132,000 profiles and found nearly 18,000 fake profiles (13.5%). Based on Cyabra’s experience with disinformation campaigns, this percentage of fake profiles indicates the presence of an online disinformation campaign; Cyabra typically encounters around seven to ten percent fake profiles.

Findings

Cyabra’s tools analyze the connections between each profile in order to understand the impact and reach of each profile. The image below is a cluster from Cyabra’s dashboard representing the main profiles participating in the vaccines discourse on Twitter and the manner in which they are connected. The red nodes represent fake profiles; the green nodes represent real profiles. The bigger the node, the more connections the profile has.
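The mapping from connection data to such a rendering can be sketched in a few lines. The edge list and fake/real labels below are invented for the example (they are not Cyabra data), and the size/color scheme simply mirrors the description above:

```python
from collections import Counter

# Hypothetical interaction edges (following, replying, or retweeting).
edges = [("bot1", "bot2"), ("bot1", "bot3"), ("bot1", "alice"),
         ("alice", "bob"), ("bot2", "bot3")]
fake = {"bot1", "bot2", "bot3"}  # labels assumed to come from a classifier

# Node size grows with the number of connections; color marks authenticity.
degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

nodes = {
    name: {"size": 10 + 5 * degree[name],
           "color": "red" if name in fake else "green"}
    for name in degree
}
# bot1 has the most connections, so it is drawn as the largest red node
```

Feeding `nodes` and `edges` into any graph-drawing library then reproduces the kind of cluster view described here: large red hubs surrounded by the real accounts they touch.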

While the visual link analysis depicted above shows all of the profiles that interacted with one another (following, replying, or retweeting), the images below represent segmented profiles, otherwise known as “communities.” Cyabra’s “community” function highlights profiles that are highly engaged with each other and share the greatest number of connections and, often, a common theme. The themes that Cyabra uncovered relating to COVID-19 vaccines are presented in the images below. Analyzing all of the fake profiles, Cyabra discovered three Twitter communities comprised of fake profiles tweeting three distinctive sets of messaging. Two of these fake campaigns actively spread favorable tweets about AstraZeneca’s vaccine while criticizing other companies developing vaccines. The third fake campaign disputes the existence of COVID-19 and attacks the utility of all COVID-19 vaccines.

Community A: Anti-Vaccine

The anti-vaccine community contains 136 fake profiles that are actively spreading negative content against all COVID-19 vaccines.

Community B: Pro-AstraZeneca (1)

Community B contains 239 fake profiles circulating positive content about AstraZeneca’s vaccine progress and positive content about the company.

Community C: Pro-AstraZeneca (2)

Community C also praises AstraZeneca but spreads harmful content surrounding Pfizer’s COVID-19 vaccine. This community contains 220 fake profiles.

Cyabra did not find a significant difference on Facebook between the topics discussed by real profiles and those discussed by fake profiles. However, a trending topic that stood out amongst fake Facebook profiles participating in the COVID-19 vaccine discourse was Bitcoin. The image below shows an example of the system’s classification of subjects, with subjects used by real profiles shown in green and those used by fake profiles shown in red.

Figure 7 – Cyabra’s division of topics between real and fake profiles

While the real profiles did not discuss anything relating to Bitcoin, fake profiles raised the subject numerous times. Cyabra analysts found that these fake profiles spread Bitcoin content for advertising purposes as part of a fake campaign.

Below is an image from Cyabra’s platform showing the connections between the fake profiles on Facebook that posted about Bitcoin in discussions relating to COVID-19 vaccines. The fake campaign is taking advantage of the online interest in COVID-19 vaccines to promote a Bitcoin website. The fake profiles identified posts related to COVID-19 vaccines with high engagement and replied to them with promotional content about Bitcoin sites and Telegram groups.

Aside from communities, there is also merit to a deep dive into the most influential profiles. In a disinformation campaign, there are typically three types of profiles with the highest impact:

  1. The most content: The more posts, replies and shares a profile creates, the more influence it has in shaping the conversation, both within the campaign and among profiles that are only partially connected to it.
  2. The most connections: The more connections a profile has, the more it can control what people within the campaign see.
  3. The most engaged: Profiles that are the most active in a campaign can shape the way people who are new to the subject perceive it.
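Each of these three influence measures reduces to ranking profiles by a different metric. A minimal sketch, using invented per-profile statistics (the names and numbers are not Cyabra data):

```python
# Hypothetical per-profile activity stats for three accounts.
profiles = {
    "p1": {"posts": 120, "connections": 15, "engagements": 40},
    "p2": {"posts": 30,  "connections": 80, "engagements": 55},
    "p3": {"posts": 45,  "connections": 22, "engagements": 310},
}

def top_by(metric):
    """Return the profile name that maximizes the given metric."""
    return max(profiles, key=lambda name: profiles[name][metric])

most_content   = top_by("posts")        # shapes the conversation
most_connected = top_by("connections")  # controls what others see
most_engaged   = top_by("engagements")  # shapes newcomers' perception
```

Note that the three rankings can point to three different profiles, which is why an analysis looks at all of them rather than a single "most influential" account.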

Cyabra marks the connections between fake and real profiles to emphasize which fake accounts “break through” the fake profile sphere and can influence real profiles. Understanding which fake profiles have the highest number of real connections is another method to understand which fake profile has the most influence.
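This "break-through" measure is just a count, per fake profile, of its connections to real profiles. A small sketch under assumed labels and edges (invented for the example):

```python
# Edge list and fake/real labels are illustrative assumptions.
edges = [("fake1", "real1"), ("fake1", "real2"), ("fake1", "fake2"),
         ("fake2", "real1")]
fake = {"fake1", "fake2"}

# For each fake profile, count how many of its connections are real.
real_reach = {f: 0 for f in fake}
for a, b in edges:
    if a in fake and b not in fake:
        real_reach[a] += 1
    if b in fake and a not in fake:
        real_reach[b] += 1

# The fake profile with the most real connections "breaks through"
# the fake sphere and can influence genuine users.
breakout = max(real_reach, key=real_reach.get)
```

Here `fake1` reaches two real profiles while `fake2` reaches one, so `fake1` would be flagged as the account most capable of influencing real users.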

The most connected fake profile on Twitter is esme_hornbeam, with 63 fake connections. The system labeled the profile as fake due to a large percentage of its content being retweets, and its bot-oriented behavior. To identify any content possibly linked to the subject, Cyabra extracted multiple tweets that the profile tweeted about the COVID-19 vaccines.

Figure 9.1 – The most connected fake profile

Conclusion

With COVID-19 vaccines currently being developed and deployed worldwide, the public now faces its next public health challenge: disinformation. In an effort to shed light on some of the ongoing fake campaigns circulating on social media, Cyabra scanned 132,000 accounts, and its AI uncovered multiple harmful agendas. Within this sample, 18,000 profiles were fake, giving each of these disinformation campaigns significant reach. Two of the fake Twitter campaigns favored the AstraZeneca COVID-19 vaccine, a misleading agenda intended to promote that vaccine as superior to the other COVID-19 vaccines in development. The third Twitter disinformation campaign, perhaps the most dangerous, attacked the existence of the coronavirus and all COVID-19 vaccines, claiming that the virus does not exist and is a guise for governments planning to control human actions. Additionally, Cyabra identified a campaign of fake profiles on Facebook that exploited the online interest in COVID-19 to advertise Bitcoin.

As disinformation becomes a more prevalent threat, Cyabra continues to monitor the disinformation surrounding COVID-19.

Chinese Disinformation Campaign on Twitter

Download the full report

Background

With a long history of diplomatic tension between China and Australia, relations deteriorated significantly in the first half of 2020. In April 2020, Australian Prime Minister Scott Morrison called for an international investigation into the origins of the coronavirus, leading the Chinese government to dismiss the proposal as political manipulation. Since then, the two countries have entered a fierce trade war. Most recently, China imposed tariffs of up to 212% on Australian wine imports, effectively cutting Australian winemakers off from their largest market.

On November 19, Australia released a long-awaited report alleging that Australian troops had committed at least 39 unlawful killings in Afghanistan during the war. Tensions between Australia and China erupted further this week over the subsequent tweet by Zhao Lijian, a Chinese foreign ministry official and spokesman, who tweeted a fake photo of an Australian soldier holding a knife to the throat of an Afghan child. The text beneath the photo reads: “Don’t be afraid, we are coming to bring you peace!”

Cyabra conducted a randomized sampling of the profiles that engaged with this tweet, uncovering an orchestrated disinformation campaign originating from China.

Analyzing nearly 1,500 profiles that engaged with Lijian’s tweet, Cyabra detected that 57.5% were fake accounts working together to spread the harmful narrative Lijian published.

The majority of these profiles were flagged as fake by Cyabra for many reasons, including the unusually low quality of the profiles analyzed.

Upon further analysis, Cyabra discovered that many of these accounts had been created only in November 2020 and had a single tweet: a retweet of Lijian’s original post, made purely to amplify it. The images below show examples of these low-quality fake accounts created in November.

Lijian’s tweet reached an exceptionally high number of profiles due to the amplification of the post by the large network of fake accounts engaging with Lijian’s tweet.

Cyabra’s platform also analyzed the connections between these profiles. Below is a visual of the “cluster” surrounding the tweet and its engagement, in which the red nodes represent fake profiles and the green ones are real profiles.

Conclusion

Given the extraordinarily high number of fake accounts that engaged with Lijian’s tweet, it is clear that this is an orchestrated disinformation campaign. This network of profiles worked together to spread and influence the wider public. Cyabra’s experts conclude that there is a high probability that a single, malicious entity (possibly a state actor based on the nature of the online campaign) orchestrated and managed all of the fake activity surrounding this tweet.

Fake news spread on social media does real harm to our divided society | Opinion

By Scott Mortman, Senior Advisor at Cyabra

When President Trump announced on social media that he had tested positive for the coronavirus, a huge number of people on social media questioned whether the news was real. To counter this “fake news” skepticism, the White House released a video and photos of the president taken at the hospital. In the past, this would have ended most suspicions. But in the rapidly developing world of “deepfakes,” many took to social media again to point out perceived discrepancies in how the president, his speech and his surroundings could have been manipulated.

Neither fake personas nor deepfakes — real people whose image, speech and/or surroundings have been altered — are creations of this century. They have been used to influence others in non-digital format since at least biblical times. What has changed is the medium used to spread these forms of disinformation.

Only in this century have we created a globally accessible platform to immediately transmit false information, disseminated by people who may not exist or appear as they are. And we are harmed by our inability to distinguish who is real or fake/deepfake, and to separate true narratives from false ones — as quickly and intensely as those spreading disinformation. The communication of agreed-upon facts forms the basis for any well-functioning society. Without a common narrative, divisions will widen.

In “The Social Dilemma,” a Netflix documentary, social-media engineers discuss how the technologies they created have been designed to optimize the time we spend online. By feeding us content and ads targeted to each user’s preferences, social media relies on artificial intelligence (AI) to reinforce these preferences. By eliminating posts and opinions that run counter to our presumed or expressed beliefs and likes, we enter into a “tech bubble” free of contradiction and conflict and in which our addiction to social-media content grows.

Yet our “social dilemma” is being exploited further. While those behind the social media platforms are working to feed our tech addiction, there now is a virtually uncountable — and unaccountable — number of fake users hijacking these technologies to push their targeted messaging to us. These fake online users have been created for one purpose — to manipulate and influence the opinions of real people online.

Regrettably, they perform quite well.

Recent studies conducted by the company I advise show that fake profiles increasingly constitute more than 30 percent of those engaged in online discussions. We also are tracking more active uses of deepfake technologies to manipulate opinion and cast doubt on our perceptions of reality.

Until now, the main purveyors of deepfakes have been pornography websites. As deepfake technologies advance, however, manipulated photos and videos of political candidates, government officials, business leaders and others in the public realm are increasing. Going forward, it is conceivable that the person with whom you are chatting on a live Zoom video conference may be a manipulated image engineered in real time.

If we are to overcome what separates us as a society, then we must find a better way to verify the truth of information shared online. Like radio and television, social media is a public forum that requires some measure of public oversight and protection. These platforms can benefit us, and they already inform too much of our daily lives for us to expect people to disconnect. At best, we may reduce our use of social media. We also may demand more of those who operate it.

Otherwise, as people have learned this year across the globe, what “goes viral” can harm us more than we imagine.

To view the original article, click here.