One can hardly talk about disinformation and
Europe without mentioning Brexit and the Cambridge Analytica scandal. However,
the last four years have shown that blaming only technology for the information
disorder is illusory. After Brexit, Europe feared disinformation campaigns
would affect the French and German elections in 2017. To date, there is no evidence of such interference, either from foreign actors (e.g., so-called troll armies)
or as a result of content curation systems on platforms. For instance, the 2017
German elections saw hardly any manipulation from abroad, and the influence of
personalization by algorithms in search engine results was negligible, as
Cornelius Puschmann wrote in “Beyond the Bubble: Assessing the Diversity of Political Search
Results.” Similarly, false news sites had no significant influence on the 2017
French elections, researchers found. On the contrary, research shows that most of the problems are man-made and partly
rooted in failures of traditional media. Nevertheless, rumors concerning
then-presidential candidate Emmanuel Macron disrupted the election campaign
and sparked a regulatory reaction.
Indeed, France was the first European country to
pass a law against “information manipulation,” targeting the
dissemination of false information for the purpose of election rigging. Under
this law, “any allegation or inaccurate or misleading attribution of a fact
which could affect the truthfulness of the forthcoming elections, which is
intentionally, artificially or automatically disseminated on a large scale via
an online public communication service” is prohibited. The law thus targets
false statements of facts that could reduce the credibility of the election
results. The application of the law is limited in time to the three months
preceding an election, including election day. Statements that violate the law
can be challenged in court through an urgent interim procedure, with a ruling
issued within 48 hours. The court may order all proportionate and necessary
measures to prevent the dissemination of false information, including blocking,
deleting, or otherwise halting the spread of the
content. The law also includes higher transparency requirements for political
advertising on social media platforms.
While this law might prevent information
manipulation during future French elections, it remains unclear how to assess
this kind of regulatory initiative in terms of its effectiveness and its
proportionality. Taking regulatory action without disproportionately
restricting fundamental rights such as freedom of opinion or freedom of the
press is extremely complicated, precisely because the phenomenon is so diffuse
and complex. Digital services can be used to try to influence people, and
automation can simplify and amplify this process. But the actual effect of
automation on election results, whether the information spread is real or fake, cannot be measured. In
fact, a recent report by the UK Information Commissioner’s Office
invalidates many of the accusations made against Cambridge Analytica two years ago.
Cambridge Analytica’s prediction model was not as effective as presumed and, as
researchers have pointed out, political microtargeting existed before social media platforms.
At the supranational level, the EU has so far chosen
self-regulatory forms of governance to address the issue. In September 2018,
the EU and leading tech companies agreed upon the Code of Practice on Disinformation. The
CPD, as it’s known, defines disinformation as “verifiably false or misleading
information” which, cumulatively, (a) “is created, presented and disseminated
for economic gain or to intentionally deceive the public”; and (b) “may cause
public harm,” intended as “threats to democratic political and policymaking
processes as well as public goods such as the protection of EU citizens’
health, the environment or security.” Tech companies can choose if and how they
comply with the CPD — hence, to which extent they will go beyond their own set
of rules. Ultimately, the platforms still govern the evaluation and interpretation of what type
of false information might be harmful to democracy. As one might expect, the
CPD was no game changer in this area.
Currently,
the EU Commission is preparing several regulatory proposals that could
include measures against disinformation on social media
platforms. The Digital Services Act, a draft of which will be revealed on Dec. 15,
will most probably include new rules regarding content curation and
transparency obligations.
The EU Democracy Action Plan, published on Dec. 3,
2020, aims “to ensure that citizens are able to participate in the democratic
system through informed decision-making free from unlawful interference and
manipulation.” With regard to the role of online platforms, the DAP includes
six objectives:
“1. monitoring the impact of disinformation and
the effectiveness of platforms’ policies, 2. supporting adequate visibility of
reliable information of public interest and maintaining a plurality of views,
3. reducing the monetization of disinformation linked to sponsored content, 4.
stepping up fact-checking, 5. developing appropriate measures to limit the
artificial amplification of disinformation campaigns, and 6. ensuring an
effective data disclosure for research on disinformation.”
It remains to be seen what concrete measures will be taken to achieve these
objectives, but it is already apparent that the plan could have a significant impact,
including beyond EU borders. Last but not least, the EU Data Governance Act will likely introduce a fiduciary duty for
data intermediaries toward data subjects.
In sum, the EU now focuses on two types of countermeasures: a significant shift toward
procedural measures rather than targeting specific types of speech and, in
parallel, stronger protection of the digital public sphere by strengthening
both users as individuals and traditional media outlets as trusted
conveyors of news.
The latter goal is also pursued by Germany with its new State Treaty on Media, under which search engines and social networks
must label social bots and explain the basic principles of their content
selection and sorting in a user-friendly and understandable way. Given the complexity of the algorithmic selection procedures used, it
remains unclear whether this duty will be practicable. According to section 94
of the State Treaty on Media, “to ensure the diversity of opinion, media
intermediaries must not discriminate against journalistically and editorially
designed contents on whose perceptibility they have a particularly high
influence.” This provision raises many questions as to its practicability and
its plausibility, mainly because intermediaries convey a wide variety of very
diverse content, each of which is of varying relevance to the formation of
opinion.
Regulating automated speech raises questions: To
what extent is the law required to observe proxy freedoms for automated agents
such as social bots? Over the past months, we’ve seen a rise of misinformation
and conspiracy theories amplified by recommender systems, sometimes leading to
real-life violence. But the other part of the story is that most conspiracy
theorists succumb to human pied pipers, even if their messages are distributed via YouTube or
Telegram channels. In theory, recommender system algorithms could contribute to a more diverse and balanced online environment and
therefore be part of the solution, provided they meet substantial transparency
requirements.
All in all, it’s an ongoing iterative process,
and regulators on both sides of the Atlantic are struggling to find adequate
responses to the issues raised. Ultimately, they will need to take action because
the stakes are high, but there is no one-size-fits-all solution because the causes are
manifold (as in the U.S.). It is therefore not sufficient to target only
“the algorithms”; a broader view, including institutional and
political dynamics, is needed. Moreover, algorithmic content curation poses many
challenges because of the risk of violating freedom of expression.
Content-based approaches are most likely unconstitutional because it seems
impossible to single out only manipulative or dangerous political speech. In the
end, higher exposure to pluralist and diverse media sources at the recipients’
level might reduce the risk of misinformation and disinformation, and be the
most moderate approach with regard to autonomy and democracy considerations.
Amélie P. Heldt is a junior legal researcher and doctoral candidate at the Leibniz-Institute for Media Research | Hans-Bredow-Institut.