Matt Carlson
For the Unlocking the Black Box conference – April 2, 2016, at Yale Law School
It is either ironic or proving a point that my first step in
writing a paper about algorithms was to consult the algorithms of Google Scholar. Nonetheless,
serendipitous things happen when you take a deep dive into repositories of academic
work. In this instance, my search turned up a computer science paper with a
rather instrumental title, “A
front-page news-selection algorithm based on topic modelling using raw text.” This
was not at all surprising; searching for research on algorithms regularly
results in a mishmash of material arrayed along a spectrum, with sociological studies comprising what Tarleton Gillespie and Nick Seaver call “critical algorithm studies” at one end and hosts of technical studies focused on the construction and operation of algorithms at the other.
This particular article fell into
the latter camp, but luckily I stopped to read it. I can’t speak to its
technical value or procedures, but it was the opening paragraph that caught my
attention:
The front page of a news aggregator,
like Google News or Yahoo! News, is the showcase where readers expect to see
significant news articles. With human-editor-based news aggregators, the burden
of reading several news articles and selecting important ones is a challenging
task. Editors may select worthless news unintentionally, or even according to
their own points of view. As a result, intelligent algorithms that allow news
aggregators to process news and select significant ones, need to be developed.
On the surface, this is a vague statement used to justify the
subsequent development of an algorithm taking a unique approach to sorting
stories into a finite list for the front page of a hypothetical news site. It
is hardly a full-blown argument, let alone a manifesto. But this paragraph is
also pregnant with assumptions about what journalism is, how it works, and how
algorithms can be introduced to make it work better. In this sense, it is an
ideology, a way of abstracting the world and formulating a particular set of values
that in turn drives concordant actions. And to the extent that it is expressed
so unproblematically and definitively, it deserves a second look.
If we take a broader view, it becomes clear that the issue at
hand is really one involving professional judgment. Professional authority
entails the ability to control knowledge in particular domains. Without the
blunt powers associated with the state, professional authority must rely on
legitimating this knowledge as socially beneficial. Doing so allows professions
to mark off particular social spaces, with the promise that this control is
backed by a sense of social responsibility.
For journalists, this schematic of professional authority quickly
runs into problems. Although journalists possess skills and expertise, their
output – the news – is not the esoteric professional knowledge associated with,
say, medicine, but a prosaic discourse tasked with being understandable to
large swaths of society. News language is largely ordinary language. Faced with
this dissimilarity, journalists justify their authority through their skill in
making judgments about what is important. News stories are stylized retellings
of events in the world, while news products – a broadcast, a newspaper, a Web
site – are carefully ordered assemblages of texts that give meaning to the
world. All of this rests on claims that journalists know what’s important and
should be trusted with these decisions.
What my paper for this conference explores is how the
growing use of algorithms complicates this function – if not supplants it
entirely. Algorithms that select news, from recommendation engines to the algorithms
that construct the newsfeed of a social media site, render their own judgment
of what’s important, often idiosyncratically based on audience behavior. More
recently, software combining natural language processing and artificial
intelligence makes it possible for machines to write
stories themselves, reducing the human role to that of the initial programmer.
This is all made more complicated by the resilience of
objectivity as a guiding norm for journalists. Even as objectivity as an
absolute ideal has lost its luster, it continues to provide an argument
validating a way of approaching the news as a legitimate form of social
knowledge. Algorithms muddle this argument in that they too are attached to a
form of mechanical objectivity presumed to be impervious to the subjective
faults of human beings. Consider again the opening paragraph quoted above: the authors identify human failings that a computer can remedy for the
benefit of all. They even see it as an imperative that this happen. This is a
problematic assumption in many ways, but it is also a powerful technocratic
one.
This tension returns us to professional judgment. Journalism
is not a mechanical craft but a creative and variable one. Journalists must
make decisions about what the public should know. The ample scrutiny journalists
receive may be taken as evidence that they often make flawed judgments, at least according to the subjective views of their critics. But this discourse is also an indication
that these judgments matter and are worth contesting.
In the end, when we think about algorithmic accountability,
we need to be sure we are looking not just at the practices that are
developing, but also at how these practices are reflected in broader discourses
through which we think about what constitutes authoritative knowledge in society.
The replacement of human news judgment by an algorithm, to put it most
bluntly, indicates larger fissures in how we recognize what legitimate
knowledge practices look like, and how we think they should look. These
developments have become visible in journalism, but this issue increasingly
extends across the professions and into larger questions of who makes knowledge.
Matt Carlson is associate professor of communication at Saint Louis University. He can be reached at mcarls10 at slu.edu.