It’s Not About The Flag: Brands Attacked For Rainbow Washing

If you live in any major western city during June, aka Pride Month, you’re probably well accustomed to the mass of rainbow flags flapping in the wind on every window, balcony, and storefront. In the current climate, it’s considered good marketing for companies to show their support for the LGBTQ+ community. It’s common for brands to give their logo the rainbow makeover without a second thought. But what happens when consumers realize that those very same brands actually fund and support anti-LGBTQ+ politicians? 

Brands That Live in Glass Houses Shouldn’t Throw Rainbows Around 

This common practice of exploiting the rainbow flag by putting on a facade of inclusivity is called Rainbow Washing (or occasionally Pinkwashing and Rainbow Capitalism): the act of using or adding rainbow colors to advertising, apparel, accessories, landmarks, etc., in order to signal progressive support for LGBTQ+ equality (and earn consumer credibility) – but with a minimum of effort or pragmatic result.

As public awareness grows, brands that take part in Rainbow Washing quickly face a new reality: their consumers, whether LGBTQ+ or allies, hold them accountable for their actions. Changing a brand’s logo and posting a few photos of the team waving rainbow flags only puts a spotlight on the brand’s actual record. Present-day consumers demand authenticity, and during Pride Month that means real support, not empty words. One could say that in today’s age of information, any major brand is living in a glass house: every word said and every action taken are constantly held up to scrutiny under the watchful eye of the public. Customers don’t need to be social activists to uncover the dark secrets hidden behind a rainbow logo.  

Using the Cyabra platform, our analysts detected an interesting trend while studying some of the biggest brands that posted pro-LGBTQ+ messages on their social media. Wary consumers quickly surfaced evidence revealing that some of those companies have donated a combined sum of over $10 million to anti-LGBTQ+ politicians and organizations. The irony of those brands plastering on the pride flag did not go unnoticed on social media. Deloitte, AT&T, Walmart, FedEx, Charter and many others were met with mocking responses and retweets calling them out for hypocrisy. 

Visibility Vs. Authenticity

Mark this in your calendar, brands: 2022 is the start of a new era, an era in which every word written on social media has to be thought through and planned, because your audience will hold you accountable for your actions. Ten years ago, every brand that gave visibility to the rainbow flag during Pride Month was celebrated. Nowadays, visibility is a habit, and the name of the game is authenticity. Rainbow Washing brands whose message was perceived as inauthentic were met with a large, passionate, and very authentic wave of consumer rage. 

AT&T

Sharing their VP’s emotional coming out story, AT&T never expected to be called out and threatened with a boycott, especially not by a pro-LGBTQ+ audience. Their tweet reached no fewer than 782,000 profiles and was engaged with by 391 profiles. 96% of this engagement (likes, comments and shares) was negative, exposing more and more consumers to AT&T’s support of anti-LGBTQ+ politicians. 

Deloitte

Deloitte wasn’t one of those brands plastering on a rainbow-colored logo and a “love is love” post: the company actually put significant work hours and money into an inclusivity survey for Pride Month, interviewing no fewer than 600 LGBTQ+ employees (none of them Deloitte employees) about their everyday life at work. The impressive survey quickly started trending, but not in the way Deloitte dreamed it would: Deloitte’s tweets spread to 532,000 profiles on Twitter and were engaged with by 266 profiles. But here’s the real shocker: 92% of those retweets, replies and posts concerning Deloitte showed negative sentiment, pointing out the irony of creating a pro-LGBTQ+ survey while supporting anti-LGBTQ+ politicians and organizations. 

Walmart

Unlike Deloitte and AT&T, Walmart’s social account didn’t feature any emotional stories or well-thought-out surveys – only a cute puppy dressed for Pride. However, having donated $1 million to anti-gay politicians, even the adorable rainbow-wrapped pup couldn’t save Walmart from social media’s rage. The tweet spread to 580,000 profiles and was engaged with by 290 profiles. 89% of this engagement was negative, proving that brands can be called out for Rainbow Washing whether they (allegedly) try to make a difference or are just selling some colorful pet accessories. 

The Unflattering Colors of the Rainbow 

Deloitte, AT&T and Walmart weren’t the only brands to find themselves in the heart of a Rainbow Washing storm: 

  • When FedEx tweeted their support, 120 profiles engaged with the tweet, and 96% of that engagement was negative, bashing FedEx for its donations to anti-LGBTQ+ politicians and organizations. 
  • Charter and Comcast, both offering LGBTQ+ content on their services for Pride Month, experienced a similar fate: 
    • 288 profiles engaged with Charter’s tweet, with 277 comments and quotes – 92% of them negative. 
    • Comcast had 173 profiles engaged with their tweets, 82% of them negative. 
  • The Home Depot, which tweeted a greeting for Pride Month, was engaged with by 215 profiles. We found 96 comments and quotes on their tweets – 94% of them negative. Home Depot’s tweets spread to 430,000 profiles on Twitter. 

As 2022 Pride Month nears its end, perhaps it’s time for brands to realize that they can’t have it both ways: if they wish to carry the pride flag, they need to be worthy of it. Their new consumers are active, observant and alert, and they demand honesty and authenticity. Consumer backlash might not be able to stop Rainbow Washing, but it’s more than willing to give brands hell for it. 

 

Want to read the full report written by Cyabra’s analysts? Contact us to receive it, free of charge!

Cyabra’s report is based on Twitter data collected between June 1 and June 22.

Fear the Day You Trend on Twitter? We’ve Got You Covered. 

Cyabra measures impact and authenticity in social networks. We offer global brands, corporations, and government agencies the ability to understand narratives, discover trends, and reach real audiences. Our AI software monitors online conversations and determines where content is coming from, ultimately allowing your company to plan ahead based on real-time trends and stop disinformation problems before they get out of hand.

Contact us to learn more!

 


Public Sector Protection: GSA Greenlights Cyabra Deployment for U.S. Agencies

Fighting disinformation has become a matter of national interest. Look no further than the COVID-19 pandemic: bad actors are pushing falsehoods online to dissuade people from getting vaccinated, which has very real consequences for the rest of the country.

Moreover, with continual social media uproars over topics like election results, it is clear that the public sector needs the ability to find and act against disinformation. That is why Cyabra is proud to announce our availability on the General Services Administration (GSA) Multiple Award Schedule (MAS).

What is the GSA?

The U.S. General Services Administration provides contracts through which agencies can buy goods and services from vendors. This streamlines the government procurement process. The GSA can be used by any federal agency, as well as certain state and local entities, making it one of the most widely-used United States government contract vehicles.

What does Cyabra offer?

With this addition to the GSA schedule, U.S. Government Agencies will be able to easily purchase Cyabra’s technology to target, track, and react to disinformation online. Cyabra is the first to offer a combination of social sentiment analysis with deepfake and disinformation detection in one SaaS platform.

Cyabra’s offerings include:

  • Fake profile behavior identification
  • Real-time social media sentiment analysis
  • Automated reporting
  • Deepfake detection
  • Tracking and analysis of how disinformation spreads

Cyabra’s technology gives public sector clients the ability to be proactive in fighting disinformation online. Agencies use Cyabra’s platform to trace fake news and deliberately false information back to the source, preventing nefarious campaigns from affecting public discourse.

Are you a federal, state, or local agency looking to hear more about Cyabra’s disinformation detection and online monitoring services for the public sector? Contact us to learn more or click here to be redirected to our GSA portal.

 

 

Disrupting Disinformation: Cyabra & Lew’Lara\TBWA Announce Social Monitoring Partnership

 

Brazilian PR agency Lew’Lara\TBWA is known for representing brands that disrupt the creative space – like when they launched the new Nissan Versa by covering it with stickers of tweets complaining about sedans.  They challenged preconceived notions about the car segment and drew major attention for Nissan in Brazil.

In that campaign, TBWA proved that it keeps its ears to the ground and pays attention to online narratives.  It is only fitting that an agency with such attention to detail would partner with a platform like Cyabra’s that will allow them to identify, analyze, and understand online dialogues.

TBWA disrupts markets, Cyabra disrupts disinformation

Through this partnership, our artificial intelligence technology allows TBWA-Brazil’s brands, including Nissan and Gatorade, to monitor online narratives, analyze content, and track the reach and impact of social campaigns.  This includes being able to determine the authenticity of authors and delineate between genuine engagement and fake content.

With the rise of bots and deepfake technology, social media and content-sharing channels are getting clogged with disinformation that can become disastrous for public relations if allowed to fester.  Cyabra’s analytics platform lets TBWA and other agency clients monitor any online conversations that mention their brands.  They can gather insights into the origins of the content and the influential authors involved, be they automated bots, disingenuous trolls, or cleverly fabricated “sock puppet” profiles.  Agency leadership can then assess these efforts and take appropriate measures to respond, alter messaging or campaign strategies, and protect their clients’ image in the public sphere.

Cyabra’s software also analyzes genuine content created by real people.  It empowers agencies like TBWA to keep their finger on the pulse of what customers are saying in digital spaces.  This social listening capability lets them learn both what their audience thinks about their brand, products and promotional efforts, and how that audience usually engages with the company online.  By knowing what’s normal and expected for their audience, TBWA and their clients can engage in meaningful ways and spot changes in sentiment quickly and effectively.

Finally, with Cyabra’s agency services, clients like TBWA can track how far social conversations reach and trace where the content started.  Are people talking about this because of something they saw in the news or from a competitor’s post or a faceless fake account?  How far did it reach and how many people joined the discussion?  This type of information is vital in planning how an agency should approach a topic and for crafting future, targeted campaigns.

Marketing with Cyabra – Knowing Your Audience

Cyabra’s social analytics platform offers PR and marketing agencies the ability to find the truth in an ever more cluttered digital space and empowers them to become the foremost authority on their target audiences.  Our AI software gives agencies essential tools to monitor what is really going on in online conversations and track where content is coming from, ultimately allowing them to plan ahead based on real-time trends and stop disinformation problems before they get out of hand.

Are you a PR or marketing agency looking to hear more about Cyabra’s disinformation detection and online monitoring services for agencies? Read more about the partnership here or contact us to learn more!

How Social Media Narratives Move Markets

Brokerage firms are one of the few broad classes of organizations that have yet to prioritize social media strategies, but a swift change is expected after the financial services industry watched everyday investors gather online to send GameStop’s stock surging. The conversations that originated within online communities and rapidly spread across nearly all social media platforms revealed a glaring gap in monitoring strategy and preparedness among financial services organizations, resulting in confusion for advisors and frustration for their clients. As the dust settles from the GameStop saga, the reality of social media’s impact is left in its wake.

Social narratives can have a lasting effect on brands and organizations, and the financial services category is not immune to the conversations taking place online. Consider how Barstool Sports founder Dave Portnoy recently launched an exchange-traded fund (ETF) that uses artificial intelligence to determine which stocks to trade based on online chatter. With social media’s role in the market further solidified, it is no longer viable for financial institutions to ignore social media or the conversations happening there. Cyabra’s technology dives below the surface of online dialogue to give clients in the financial sector invaluable data and analysis for making smarter decisions for their firms and their clients.

Monitoring Social is Only Step One:

It’s not enough to simply identify these conversations happening online; advisors need to dig deeper to uncover how these narratives are progressing and changing as they relate to their book of business. Once the conversation is understood, advisors need to identify exactly how far the content has traveled and which influential accounts are giving additional attention and reach to the narrative. Cyabra’s platform uncovers the true reach of these narratives while identifying the fake accounts or bad actors further fueling content reach. Before decisions can be made, financial teams need an accurate picture of the conversation taking place.

Real-Time Analysis, Backed by Accurate Data is Key:

Once the narratives and their reach have been identified, Cyabra allows teams to stay ahead of trends, provide client counsel, and make changes to a portfolio as needed. By monitoring ticker symbols and client holdings across all social media platforms, and by identifying conversations that could impact market fluctuations, real-time analysis backed by accurate data gives advisors the ability to stay ahead of the curve.
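
To make that workflow concrete, here is a minimal, hypothetical Python sketch of the kind of logic such monitoring could involve: scanning a stream of posts for watched ticker symbols and flagging tickers whose mentions are both high in volume and predominantly negative. The ticker list, thresholds, and pre-labeled sentiment values are illustrative assumptions for the sketch, not a description of Cyabra’s actual platform.

# Minimal illustrative sketch (assumptions, not Cyabra's implementation):
# flag watched tickers whose social mentions are high-volume and mostly negative.
from collections import defaultdict

WATCHED_TICKERS = {"GME", "AMC"}      # hypothetical client holdings
MIN_MENTIONS = 100                    # ignore tickers with little chatter
NEGATIVE_SHARE_ALERT = 0.6            # alert when 60%+ of mentions are negative

def extract_tickers(text):
    # Match plain or $-prefixed mentions of watched tickers in a post.
    return {w.lstrip("$").upper() for w in text.split()
            if w.lstrip("$").upper() in WATCHED_TICKERS}

def flag_tickers(posts):
    # posts: iterable of (text, sentiment) pairs, where sentiment is
    # "positive", "neutral", or "negative" from an upstream classifier.
    counts = defaultdict(lambda: {"total": 0, "negative": 0})
    for text, sentiment in posts:
        for ticker in extract_tickers(text):
            counts[ticker]["total"] += 1
            if sentiment == "negative":
                counts[ticker]["negative"] += 1
    return [t for t, c in counts.items()
            if c["total"] >= MIN_MENTIONS
            and c["negative"] / c["total"] >= NEGATIVE_SHARE_ALERT]

In practice, the posts would come from live platform feeds, the sentiment labels from a classification model, and the thresholds would be tuned per client and per holding.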

As social media continues to evolve and grow, businesses need to be ready to react to the conversations happening. Financial institutions can no longer ignore social media when managing their clients’ portfolios. With the right tools in hand, advisors can make smarter decisions that drive better business outcomes.

To learn more about Cyabra’s offerings for those in the financial services industry, or to request a demo of the technology, reach out to us here: https://cyabra.com/contact/

 

#BoycottSephora: A Case Study

How a Brand’s Image can be Destroyed in Seconds

On January 29, makeup influencer Amanda Ensing shared a sponsored video featuring Sephora products on Instagram. What transpired next was a classic example of a brand crisis created within seconds. Within minutes of her post, social media users discovered previous posts of Ensing’s promoting the riots on Capitol Hill. The hashtag #boycottsephora began to trend, with calls for Sephora to denounce Ensing. Once Sephora did so, the hashtag flipped, this time criticizing Sephora as being against conservatives, free speech, Christians and Latinos, to name just a few.

What happened here is what we call a snowball effect – when just one post “snowballs,” spreading a message and creating outsized influence and impact.

Here’s what Cyabra uncovered:

Seconds after Ensing’s promotional post went live, followers discovered an earlier tweet of hers in support of the Capitol riots.

Immediately, calls to boycott Sephora trended with the hashtag #boycottsephora, demanding that the makeup company sever ties with Ensing.

Soon after, Sephora announced that they would no longer be working with Ensing, severing ties with her.

However, the damage was already done, with more to follow. With Sephora’s decision to cut ties with Ensing, her supporters took over the #BoycottSephora hashtag on social media, completely reversing the narrative – this time the company was shamed for being anti-conservative.

According to Cyabra’s analysis, the words most associated with the hashtag #boycottsephora were MAGA, Christians and Conservatives.

After analyzing influential Instagram personas participating in the conversation, Cyabra identified Emily Sarmo, a fashion blogger with a 17.2K following. Her post criticized Sephora for anti-free-speech sentiment and received nearly 500 likes. The Instagram page “Latinos With Trump” was an influential voice as well, calling on followers to boycott Sephora for severing ties with Ensing.

Adding more fire to the conversation, real and fake profiles were created just for the sake of this hot topic. An Instagram page titled “F*** Sephora” was created, claiming that the company discriminates against Christians and conservatives.

 


Not only were there calls to boycott Sephora, but both sides of the political aisle called on users to switch to Ulta, one of Sephora’s major competitors.

As demonstrated, with one video posted on Instagram, Sephora lost consumers across the political spectrum. Not only was there backlash from followers on both the right and the left, but apolitical posts criticized the company for taking sides and limiting people’s right to express their views. As we can see, posts like these snowball at an ever-increasing pace in today’s media landscape, irreversibly destroying brand names.

Vaccine Disinformation – What Does It Really Mean?


As the world begins to deploy COVID-19 vaccinations, the news is flooded with articles about the threat of vaccine disinformation. These articles claim that fake news and disinformation surrounding the vaccines are rampant and are influencing the public’s view of the vaccines. However, what remains unclear is where this disinformation comes from, how it spreads, and what motivations lie behind it.

We’ll tackle each of these questions step by step.

Social Media as a News Source

Let’s first talk about the medium where these conversations take place. As we know, social media has significantly impacted the media landscape. The days of checking newspapers, local media outlets, or even a simple Google search are nearly obsolete for much of the public. Today, social media platforms such as Facebook, Twitter and Instagram have taken over as news sources. This means that anyone on these platforms is now a source of news.

Social Media Users

Now we can talk about who these sources are and what influence and reach they have.

First, there are the people you know personally – your friends, family members and acquaintances. They may share a similar outlook to yours; you likely have a lot in common. Oftentimes, we trust these sources – we know where they come from and what they represent.

Public figures are another category. These can be celebrities, politicians, business leaders or influencers. These figures have the power to influence the decisions we make every day – anything from political views to which shampoo to use.

We also have the social media accounts of media outlets, which share their articles. While reliable media outlets are active on these platforms, unreliable sources can become just as active and influential.

Lastly, we have fake profiles. These include fake users, avatars, bots, sockpuppets, and even real people who are paid to promote specific agendas. These profiles are used to spread and amplify messages or information.

While fake profiles may seem like the more obvious source of false information, the other categories can often be equally, if not more, dangerous. Public figures, influencers, friends and acquaintances are real people who can be sources of any and all information – real or fake.

See where the problem begins?

Now that we understand the different sources of information, we need to understand what these sources are capable of.

Fake News, Disinformation and Misinformation

To further understand the problem, we need to define what fake news, disinformation and misinformation – the less commonly used term – really mean. While all three fall under the category of false information, each term is distinct and carries important differences that are often missed.

Misinformation is false information that is spread deliberately or accidentally, regardless of the motivations behind it.

Disinformation is false information that is deliberately biased, fabricated or misleading. It can take the form of manipulated narratives, distorted facts or propaganda.

Lastly, fake news is false information that is purposefully created in a format that imitates mainstream media in order to spread misinformation, conspiracy theories or hoaxes. Sometimes these messages are not outright fabrications, but rather sensational, emotionally charged or misleading presentations of information.

Each of these types of false information poses a threat. While some false information on social media can be spotted easily (that’s a conversation for another blog!), the more “trusted” a source is, the higher the risk.

Types of Vaccine Disinformation

Now that we understand both the different categories of social media as well as the different types of false information, we can return to our original topic- vaccine disinformation.

As we know, one of the most globally discussed topics is the pandemic, creating an overabundance of information. This means that parties interested in this topic- and I believe it’s safe to say that this is a large percentage of the population- are susceptible to consuming false information.

As the pandemic conversation has turned to vaccines, we are witnessing an increase in the spread of false information. Some of this information is spread by malicious social media users, and some by altruistic ones.

Here are some examples of the reasoning behind social media users spreading COVID-19 vaccine misinformation:

  1. Exploitation: People want to make money. By hooking readers with information about the vaccine, they can advertise their own agendas in connection to vaccines. For instance, Cyabra identified a campaign of fake profiles on Facebook that exploited the online interest in COVID-19 to advertise Bitcoin.
  2. Answers: When times are hard, people want answers. They look for comfort in reasons and explanations for what is happening. This is a feeding ground for conspiracy theories to thrive, which has led many to believe that the virus is a way for world leaders to control their citizens.
  3. Community: People like to feel that they belong. They seek connections, and by attaching themselves to an idea or opinion and engaging in social media conversations, they become part of a community. Many social media users will seek out these communities to feel a sense of purpose.

COVID-19 has brought out the vulnerability in all of us. Unfortunately, this leads to an environment where false information spreads, and spreads fast. And social media can be a hotbed for this.

The Dangers

Now that we understand what vaccine disinformation really means, it’s important to highlight its dangers. Put simply, mistrust and confusion cause vaccine hesitancy, leaving efforts to end a pandemic that has killed 1.85 million people in severe danger.

The infodemic is our latest public health threat, and we need to stop it.

Cyabra is actively monitoring the social media conversations surrounding COVID-19. Our latest report on vaccine disinformation details our findings.

Fake news spread on social media does real harm to our divided society | Opinion

By Scott Mortman, Senior Advisor at Cyabra

When President Trump announced on social media that he had tested positive for the coronavirus, a huge number of people on social media questioned whether the news was real. To counter this “fake news” skepticism, the White House released a video and photos of the president taken at the hospital. In the past, this would have ended most suspicions. But in the rapidly developing world of “deepfakes,” many took to social media again to point out perceived discrepancies in how the president, his speech and his surroundings could have been manipulated.

Neither fake personas nor deepfakes — real people whose image, speech and/or surroundings have been altered — are creations of this century. They have been used to influence others in non-digital format since at least biblical times. What has changed is the medium used to spread these forms of disinformation.

Only in this century have we created a globally accessible platform to immediately transmit false information, disseminated by people who may not exist or appear as they are. And we are harmed by our inability to distinguish who is real or fake/deepfake, and to separate true narratives from false ones — as quickly and intensely as those spreading disinformation. The communication of agreed-upon facts forms the basis for any well-functioning society. Without a common narrative, divisions will widen.

In “The Social Dilemma,” a Netflix documentary, social-media engineers discuss how the technologies they created have been designed to optimize the time we spend online. By feeding us content and ads targeted to each user’s preferences, social media relies on artificial intelligence (AI) to reinforce these preferences. By eliminating posts and opinions that run counter to our presumed or expressed beliefs and likes, we enter into a “tech bubble” free of contradiction and conflict and in which our addiction to social-media content grows.

Yet our “social dilemma” is being exploited further. While those behind the social media platforms are working to feed our tech addiction, there now is a virtually uncountable — and unaccountable — number of fake users hijacking these technologies to push their targeted messaging to us. These fake online users have been created for one purpose — to manipulate and influence the opinions of real people online.

Regrettably, they perform quite well.

Recent studies conducted by the company I advise show that fake profiles increasingly constitute more than 30 percent of those engaged in online discussions. We also are tracking more active uses of deepfake technologies to manipulate opinion and cast doubt on our perceptions of reality.

Until now, the main purveyors of deepfakes have been pornography websites. As deepfake technologies advance, however, manipulated photos and videos of political candidates, government officials, business leaders and others in the public realm are increasing. Going forward, it is conceivable that the person with whom you are chatting on a live Zoom video conference may be a manipulated image engineered in real time.

If we are to overcome what separates us as a society, then we must find a better way to verify the truth of information shared online. Like radio and television, social media is a public forum that requires some measure of public oversight and protection. These platforms can benefit us, and they already inform too much of our daily lives to expect people to disconnect. At best, we may reduce our use of social media. We also may demand more of those who operate it.

Otherwise, as people have learned this year across the globe, what “goes viral” can harm us more than we imagine.

To view the original article, click here.