Annemarie Bridy
The challenge of keeping harmful and illegal content off the Internet is as old as the Internet itself. Meeting that challenge, however, has never felt more urgent than it does now. And technology companies have never been under greater pressure to figure out how to do it quickly, at scale. Facebook CEO Mark Zuckerberg recently assured members of Congress that advances in machine learning over the next few years will improve and more fully automate what he admits has been a deeply flawed process for removing banned content from Facebook. In an op-ed published in The Washington Post, Zuckerberg went so far as to recommend federal legislation requiring online platforms to “build systems” that block unwanted speech.
Whereas Zuckerberg is relatively new to the filtering faith, the music and film industries have long extolled the virtues of enforcing copyrights online through automated content recognition (ACR) technology. For the better part of the last fifteen years, these industries have argued that the Digital Millennium Copyright Act’s reactive framework for removing infringing user-generated content (UGC) from online content-sharing platforms is woefully inadequate, and that such platforms should be required to deploy proactive “technical measures” for preventing copyright infringement. Music industry lobbyists point to YouTube’s voluntarily implemented Content ID system as proof that filtering technology is available and affordable. If YouTube is already filtering, they argue, why not just make it a legal requirement for everyone?
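For readers unfamiliar with how ACR works, the sketch below illustrates the core matching idea in deliberately simplified form. It is a conceptual toy, not a description of Content ID’s actual implementation: real systems compute perceptual fingerprints that survive re-encoding and other transformations, whereas this example uses exact hashes of fixed-size chunks, and every name and parameter in it is hypothetical.

```python
import hashlib

def fingerprint(audio_chunks):
    """Reduce each chunk of raw audio bytes to a compact hash.

    Real ACR systems compute perceptual features (e.g., spectrogram
    peaks) so that matches survive re-encoding; an exact hash is used
    here only to keep the illustration short.
    """
    return [hashlib.sha256(chunk).hexdigest() for chunk in audio_chunks]

def build_reference_index(claimed_works):
    """Map fingerprint -> (work_id, chunk_position) for all claimed works."""
    index = {}
    for work_id, chunks in claimed_works.items():
        for pos, fp in enumerate(fingerprint(chunks)):
            index[fp] = (work_id, pos)
    return index

def scan_upload(upload_chunks, index, min_matching_chunks=3):
    """Flag an upload if enough of its chunks match a single reference work.

    Note what this logic cannot see: whether the matched material is
    licensed, de minimis, or a fair use such as criticism or parody.
    """
    hits = {}
    for fp in fingerprint(upload_chunks):
        if fp in index:
            work_id, _ = index[fp]
            hits[work_id] = hits.get(work_id, 0) + 1
    return [work for work, count in hits.items() if count >= min_matching_chunks]
```

The instructive part of the sketch is what the matching logic cannot see: whether a matched clip is licensed, de minimis, or a lawful fair use. That blind spot is at the heart of the fair-use questions raised below.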
The music industry views a statutory filtering mandate as the key to capturing revenue now lost in what it calls the “value gap” between what YouTube pays to license copyrighted music and what on-demand streaming services like Spotify pay. In the United States and the European Union, YouTube and other UGC-sharing platforms have historically been protected by statutory safe harbors that insulate them from liability for users’ infringements, as long as they comply with rightholders’ takedown requests. Safe harbors give UGC-sharing platforms the legal cover they need to provide open forums for public expression.
Because safe harbors in their original form put the burden of monitoring for infringement on rightholders, YouTube has had no regulatory incentive to assume that burden. It has, however, had a business incentive to offer the music industry’s big players access to Content ID in return for licenses to popular content. The terms of those licenses have been negotiated in the shadow of the safe harbors and include what rightholders believe are unfair ad revenue splits for views of infringing UGC videos that rightholders use Content ID to monetize instead of blocking. At the end of the day, the music industry doesn’t want infringing UGC kept off YouTube. That would mean giving up a prime market that has so far generated more than six billion dollars in ad revenue for rightholders. What the music industry wants is a bigger share of YouTube’s pie, and it aims to get that by convincing policy makers to alter the regulatory incentives around monitoring for all services that allow users to publicly share content.
Now that streaming has supplanted paid downloads as the dominant format for digital music delivery, the music industry wants all platforms that stream copyrighted content to be treated equally under copyright law, even though dedicated music platforms like Spotify don’t host UGC at all and therefore don’t face the open-ended legal risk that safe harbors are designed to limit. Nor do closed platforms like Spotify offer the general public open-ended opportunities for self-expression and creative production. Because UGC platforms are open to all comers, they cannot possibly license in advance the entire universe of copyrighted content their users might ever upload. That is why safe harbors exist, and why they have historically placed the burden of monitoring for infringement ex post on rightholders.
Wealthy tech giants like YouTube and Facebook can likely afford to bear the legal risk associated with narrowed safe harbors, and they can absorb the high cost, in both technological and human resources, of operating sophisticated ACR systems. (In a 2018 report on the company’s anti-piracy efforts, Google said it had invested $100 million in building and operating Content ID.) Emerging and smaller online businesses lack such resources. Constricting safe harbors through de facto or de jure monitoring obligations for platforms could therefore substantially limit dynamism at the Internet’s now highly concentrated edge, where consumers find themselves locked in to mega-platforms with few competitors. Copyright policy adjustments aimed at redistributing wealth from Big Tech to Big Music risk the unintended consequence of further entrenching the few that can “pay to play” under a tightened liability regime.
As the U.S. Copyright Office mulls recommending changes to the scope of the DMCA safe harbors, and EU member states prepare to transpose Article 17 (formerly Article 13) of the controversial Digital Single Market (DSM) Copyright Directive into domestic law, now is a good time to take a hard look at whether it makes sense to hardwire ACR systems like Content ID into copyright law through “notice-and-staydown” requirements.
Many urgent questions arise: Should the wide universe of UGC services that have flourished for two decades under the protection of safe harbors lose that protection so that Big Music can secure bigger payouts from Big Tech? Is the public’s interest served by changes to copyright law that could dramatically elevate operating risk for all online services that allow users to share content? Should copyright safe harbors be conditioned, implicitly or explicitly, on platforms’ implementing ACR systems? Are such systems accessible and sustainable for services that lack YouTube’s resources? Are ACR systems fit for purpose when it comes to protecting lawful expression, including fair use of copyrighted material? If not (and we have ample evidence of their limits), how strongly should that militate against public policies requiring or encouraging broader deployment?
The public needs and deserves evidence-based answers to these questions before new laws favoring or requiring deployment of ACR systems are enacted. In Europe, these questions are now largely moot, given parliamentary approval of the DSM Copyright Directive. In the United States, however, the conversation about potential modifications to the DMCA is only getting started. It is seductive to look for technological solutions to content-related problems on massive platforms like Facebook and YouTube. Given the urgency and the scale of some of those problems, both the platforms themselves and policy makers have strong incentives to put their faith in a quick technological fix. The public, however, should be skeptical, because the competitive and expressive costs of making UGC platforms filter everyone’s speech before it can be shared would be profound.
Annemarie Bridy is the Allan G. Shepard Professor of Law at the University of Idaho College of Law, Affiliate Scholar at the Stanford Law School Center for Internet and Society, and Affiliated Fellow at the Yale Law School Information Society Project. Professor Bridy specializes in intellectual property and information law, with specific attention to the impact of new technologies on existing legal frameworks for the protection of intellectual property and the enforcement of intellectual property rights. You can reach her by e-mail at abridy at uidaho.edu.