
Fake News, Real War: The Disinformation Frontline

Disinformation has been part of our lives for so long that it's worth stepping back every once in a while from the mountains of research and theory, looking at the key players in what has become the cold war of the 21st century, and asking who is winning the fight right now.

Who Cares About Disinformation? 

In the world of disinformation, there are four key stakeholders:

  1. Active consumers – increasingly aware of the problem and concerned enough to act. In recent years, many consumers have adapted and educated themselves: they not only think carefully before sharing, they verify the information that reaches them using online tools and sites as well as reliable sources. However, the sheer scale of disinformation reaching social media platforms is a challenge even for careful consumers, who can unwittingly become spreaders of misinformation themselves.
  2. Social media platforms themselves – while in the past platforms could be held liable for the content spread on them, since 1996 they have been protected under Section 230 of the Communications Decency Act, which makes them immune from liability for third-party content generated by their users. A provider of a social media platform is therefore not treated as the publisher or speaker of information provided by another information content provider. The law dates back to a time when there was little to no social media, and many voices are calling for it to be reformed in a way that imposes civil liability on the platforms without hurting free speech.
  3. Regulators – as mentioned, social media platforms are protected in the US under Section 230, which means the only way to change that is through regulators – in the US, Congress. In recent years, public pressure has been building on Congress to act, but as we all know, regulatory change can take a while to start and even longer to complete.
  4. Social threat intelligence companies – a growing ecosystem of organizations whose mission is to develop technology that detects inauthentic activity and identifies, uncovers, and takes down fake profiles.

Are We Losing the Fight Against Disinformation?

Despite the growing awareness, it's important to understand that bad actors are constantly getting better at spreading disinformation. And since social media encourages international discourse, profiles from anywhere in the world can influence a national or even a local issue.

Nowadays, it takes far fewer fake users to effectively influence a great number of real ones. In the past, swaying a conversation required 20%-35% fake profiles. Now, with the progress of AI in general and the growing “intelligence” of bot networks in particular, 5%-15% is enough to create a real impact on real people. Even if those people don’t go on to spread the fake information, they are manipulated simply by being exposed to it. As few as 5% fake profiles can impact millions of people in a conversation.
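To give a rough sense of how a small share of fake profiles can translate into exposure on that scale, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical assumption chosen for illustration, not data from Cyabra or from this article.

```python
# Back-of-the-envelope illustration of the paragraph above.
# Every number below is a hypothetical assumption, not real data.

total_profiles = 200_000         # profiles active in a trending conversation (assumed)
fake_share = 0.05                # 5% of those profiles are fake (the lower bound cited above)
posts_per_fake_profile = 30      # posts each fake profile publishes during the campaign (assumed)
avg_views_per_post = 400         # average impressions a single post earns (assumed)

fake_profiles = int(total_profiles * fake_share)
impressions = fake_profiles * posts_per_fake_profile * avg_views_per_post

print(f"{fake_profiles:,} fake profiles -> ~{impressions:,} impressions")
# Output: 10,000 fake profiles -> ~120,000,000 impressions
```

Impressions are not unique people and overlap heavily, but even after aggressive discounting for overlap, exposure at that order of magnitude can plausibly reach millions of distinct users – which is exactly the dynamic described above.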

There is good news, too: technologies for monitoring, detecting, and reporting fake news and fake profiles are constantly improving. While the industry standard used to be identifying inauthentic profiles with roughly 80%-85% accuracy and confidence, Cyabra’s AI model determines whether a profile is fake with a confidence level of over 90%.

However, while AI that detects fake profiles keeps improving and adapting, these technologies are reactive by nature: they must study new developments in the field (deepfakes, for example) before they can identify and uncover them. Tracking the source is another real issue, and not always achievable – even if you uncover the fake profile that started a wave of disinformation, there is no guarantee you will be able to trace it to a real-world operator.

There is another encouraging change related to awareness: so far, we’ve mentioned the growing awareness on the consumer side, and the public sector has been well aware of the problem for years and treats disinformation with the gravity it deserves. In recent years, though, the private sector has entered the game and become more conscious of the impact of disinformation. Companies and corporations are starting to understand that protecting their reputation and revenue doesn’t begin and end with tracking negative sentiment – it requires analyzing the profiles spreading that sentiment and identifying whether they are real or fake.


Protect Your Company From Risks

Disinformation is an ever-growing field of study. Understanding disinformation means understanding its language and terms, the different types of false information, the bad actors, why it is created and who benefits from it, how to protect yourself against it, and how to protect everyone else. If you’re a company or corporation looking to protect your brand, reputation, and people from the threats of disinformation, contact Cyabra to set up a demo.