
Legislative Principles for Tackling Disinformation

Posted: May 2019
Europeans were flooded with disinformation ahead of the EU elections. Avaaz uncovered disinformation networks in France, the UK, Germany, Spain, Italy and Poland whose content was viewed over 760 million times in the past three months, before finally being removed by Facebook after the Avaaz team flagged the pages.

Avaaz’s analysis covered just 6 countries over the last 3 months and looked only at Facebook, yet it found the networks' content had been viewed three quarters of a billion times. These extraordinary numbers are just the tip of the iceberg -- the spread of disinformation across the continent is likely much wider.

It is clear that the Code of Practice coordinated by the Commission has failed to achieve its purpose: protecting Europeans from disinformation.

Current Commissioners must urgently prepare the groundwork for strong EU regulations to defeat disinformation and protect democracy. It is up to them to ensure that the next Commission is empowered to tackle this threat quickly.

Avaaz has developed 5 legislative principles for the EU - principles that should be at the foundation of any legislation to fight disinformation. You can find them below, along with our report:

5 Legislative Principles

Avaaz has developed a comprehensive regulatory proposal based on the principles of transparency, responsibility, and freedom. We’ve consulted deeply with academics, lawmakers, civil society, and social media executives to arrive at 5 legislative principles that must inform any democratic effort at legislation:
  • Correct the Record
    Correct the Record exposes disinformation and other manipulations, educates the public, and reduces belief in lies. The policy requires platforms to inform users and push effective corrections to every person who saw information that independent fact-checkers have determined to be disinformation. This solution would tackle disinformation while preserving freedom of expression, since Correct the Record adds the truth but leaves the lies alone. Newspapers publish corrections right on their own pages and television stations on their own airwaves; platforms should provide the same service to their users.

    Research commissioned by Avaaz and conducted by leading experts proves that providing corrections to social media users who have seen false or misleading information can decrease belief in disinformation by half. Multiple other peer-reviewed studies have demonstrated that effective corrections can reduce and even eliminate the effects of disinformation. Studies attempting to replicate the often discussed “backfire effect” - where corrections entrench false beliefs - have instead found the opposite to be true. Meanwhile, researchers are converging on best practices for effective corrections.

    In our view, correcting the record would be a five-step process:
    1. Define: The obligation to correct the record would be triggered where:
      • Independent fact-checkers verify that content is false or misleading;
      • A significant number of people -- e.g. 10,000 -- viewed the content.
    2. Detect: Platforms must:
      • Deploy an accessible and prominent mechanism for users to report disinformation;
      • Give independent fact-checkers access to content that has reached, e.g., 10,000 or more people.
    3. Verify: Platforms must work with independent, third-party verified fact-checkers to determine whether reported content is disinformation.
    4. Alert: Each user exposed to verified disinformation should be notified using the platform’s most visible and effective notification standard.
    5. Correct: Each user exposed to disinformation should receive a correction that is of at least equal prominence to the original content and that follows best practices, which could include:
      • Offering a reasoned alternative explanation, keeping the user's worldview in mind;
      • Emphasizing factual information while avoiding, whenever possible, repeating the original misinformation;
      • Securing endorsement by a media outlet or public figure the user is likely to trust.
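    To make this concrete, here is a minimal, purely illustrative sketch in Python of how a platform might wire the five steps above together. The view threshold, field names, and helper functions are our own assumptions for illustration, not any platform's actual implementation.

```python
from dataclasses import dataclass

VIEW_THRESHOLD = 10_000  # assumed trigger, mirroring the "e.g. 10,000" figure above

@dataclass
class Post:
    post_id: str
    views: int
    exposed_users: list           # users who saw the post
    fact_check_verdict: str = ""  # set by independent, third-party fact-checkers

def correct_the_record(post: Post) -> None:
    """Illustrative pipeline for the five steps: define, detect, verify, alert, correct."""
    # Steps 1-2 (Define/Detect): only widely seen content triggers the obligation.
    if post.views < VIEW_THRESHOLD:
        return
    # Step 3 (Verify): act only on content fact-checkers rated false or misleading.
    if post.fact_check_verdict != "false_or_misleading":
        return
    # Steps 4-5 (Alert/Correct): reach every exposed user with equal prominence.
    for user in post.exposed_users:
        notify(user, post)           # most visible notification channel
        show_correction(user, post)  # correction at least as prominent as the post

def notify(user: str, post: Post) -> None:
    print(f"[alert] {user}: a post you saw ({post.post_id}) was rated disinformation")

def show_correction(user: str, post: Post) -> None:
    print(f"[correction] {user}: here is the fact-checked correction for {post.post_id}")

correct_the_record(Post("p1", views=12_000, exposed_users=["alice", "bob"],
                        fact_check_verdict="false_or_misleading"))
```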
  • Detox the Algorithm 
    Social media companies’ ‘curation algorithms’ decide what we see, and in what order, when we log on. They’re designed to keep us glued to the screen and always wanting to come back for more. They succeed in part by pushing emotionally charged, outrageous and polarizing content to the top of our feeds. That’s one of the big ways hatred, disinformation, and calls to political violence go viral.
    Fortunately, this can be fixed. Having designed and developed these algorithms, platforms can Detox their Algorithms by making sure known disinformation is downgraded, not amplified, in our feeds, by demonetizing disinformation, and by being transparent with their users through clear alerts.

    Facebook’s own research shows that slowing the spread of disinformation can reduce views by up to 80%. But this solution is not being rolled out at scale by Facebook or other major social media platforms.
    Research shows that curation algorithms can lead ‘regular people’ to extremism. An internal report at Facebook in 2016 revealed that 64% of users who joined an extremist group on its platform did so only because the algorithm recommended the group. One study demonstrates that YouTube’s recommendations prioritize extreme right-wing material after interaction with similar content.

    Platforms are aware of the problem -- reporting suggests that Facebook knows that its algorithms “exploit the brain’s attraction to divisiveness”, and that, if left unchecked, they will “feed users more and more divisive content to increase user attention and time on the platform.”

    Three Steps to Stop the Spread and Detox the Algorithms:

    1.  Detect and downgrade known pieces of misinformation and all content from systematic spreaders.  All platforms should stop accelerating any content that’s been debunked by independent fact-checkers, as well as all content from pages, groups, or channels that systematically spread misinformation.

    2. Demonetize systematic spreaders. When an actor has been found to systematically post content that independent fact-checkers have debunked, platforms must ban that actor from advertising and from monetizing their content.

    3. Inform users and keep them safe. Users should be informed through clear labels when they’re viewing or interacting with content from actors who were found to be repeatedly and systematically spreading misinformation, and be provided with links to additional information.

    Detox the Algorithm protects free speech by requiring that all content remains available and guarantees users due process -- the right to be notified and to appeal the platforms’ decisions. It also protects freedom of thought by slowing the spread of harmful lies that change how our brains are wired.
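    As a purely illustrative sketch of the three steps above, the following Python snippet shows one way a platform's ranking, monetization, and labelling hooks could apply a downgrade, demonetization, and warning label once fact-checkers have flagged content; the factor, threshold, and names are our assumptions, not any platform's real system.

```python
from typing import Optional

# Purely illustrative feed-ranking hook; all values below are assumptions.
DOWNGRADE_FACTOR = 0.2    # assumed penalty applied to debunked content
SPREADER_THRESHOLD = 3    # assumed strikes before an actor counts as a systematic spreader

def rank_score(post: dict, base_score: float, strikes: dict) -> float:
    """Step 1: downgrade debunked content and content from systematic spreaders."""
    if post["debunked"] or strikes.get(post["author"], 0) >= SPREADER_THRESHOLD:
        return base_score * DOWNGRADE_FACTOR
    return base_score

def can_monetize(author: str, strikes: dict) -> bool:
    """Step 2: demonetize (and bar from advertising) systematic spreaders."""
    return strikes.get(author, 0) < SPREADER_THRESHOLD

def label_for(post: dict, strikes: dict) -> Optional[str]:
    """Step 3: inform users with a clear label (plus a link to more information)."""
    if strikes.get(post["author"], 0) >= SPREADER_THRESHOLD:
        return "This account repeatedly shared content rated false by independent fact-checkers."
    return None

strikes = {"page_x": 5}
post = {"author": "page_x", "debunked": False}
print(rank_score(post, base_score=1.0, strikes=strikes))   # 0.2 (downgraded)
print(can_monetize("page_x", strikes))                     # False
print(label_for(post, strikes))                            # warning label
```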
  • Ban fake accounts and unlabelled bots
    Fake accounts and unlabelled bots act as conduits for disinformation and harm voters in precisely the same way that misleading advertising and unfair business practices harm consumers. A ban on them must therefore be mandatory on all platforms. Many platforms’ guidelines and policies already include such a ban, but platforms are underperforming when it comes to actively searching for fake accounts, closing the loopholes that allow them to multiply, and reducing the incentives provided by their own services that favor the existence of bots.

    Bots must be prominently and effectively labelled, and all content distributed by bots must prominently include the label and retain it when the content or message is shared, forwarded, or passed along in any other manner. All such labels must be presented and formatted in a way that makes it immediately clear to any user that they are interacting with a non-human.

    In summary, platforms must ban fake accounts and unlabelled bots that act as conduits for disinformation and take action to track down the networks that create and run them, closing the loopholes that allow them to multiply, and reducing the incentives provided by their own services.
  • Label paid content and disclose targeting
    Transparency regarding financial compensation should apply to all paid communications, political and non-political. Citizens ought to be able to know who paid for any advertisement and on what basis the viewer was targeted. In order to protect citizens from disinformation warfare, these standards of transparency should apply to all paid content -- not just political advertising. Additionally, platforms must label state-sponsored content (or propaganda), increasing transparency by disclosing where and how that content was paid for and who created it.
  • Transparency
    In the evolving defence of our democracies against disinformation, it is essential that governments, civil society, and the general public be informed about the nature and scale of the threat, and about the measures being taken to guard against it. Online platforms must therefore provide comprehensive periodic reports listing, aggregated by country and/or language, the disinformation found on their services, the number of bots and inauthentic accounts that were detected, what actions were taken against those accounts and bots, and how many times users reported disinformation. The reports must also detail platforms’ efforts to deal with disinformation, in order to make the nature and scale of the threat public.
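    As an illustration of what such a periodic report could contain, the following Python sketch defines a hypothetical per-country, per-language entry with the metrics listed above; the field names and figures are placeholders, not a prescribed format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TransparencyEntry:
    """One per-country/per-language row of a hypothetical periodic transparency report."""
    country: str
    language: str
    disinformation_items_found: int
    bots_detected: int
    inauthentic_accounts_detected: int
    accounts_actioned: int              # removals, suspensions, labels, etc.
    user_disinformation_reports: int

# Placeholder figures for illustration only; real reports would use the platform's data.
entry = TransparencyEntry(
    country="FR", language="fr",
    disinformation_items_found=1_240,
    bots_detected=310,
    inauthentic_accounts_detected=95,
    accounts_actioned=78,
    user_disinformation_reports=5_120,
)
print(json.dumps(asdict(entry), indent=2))
```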

Read the full report


REPORT: FAR RIGHT NETWORKS OF DECEPTION

Ahead of the EU elections, Avaaz conducted a Europe-wide investigation into networks of disinformation on Facebook. This was the first investigation of its kind, and it uncovered far-right and anti-EU groups weaponizing social media at scale to spread false and hateful content. Our findings were shared with Facebook and resulted in an unprecedented shutdown of Facebook pages just before voters headed to the polls.