Computers, Privacy & the Constitution
Ready for review. Comments welcome.

The Evolving Problem of DMCA Takedowns

-- By BrianS - 24 Mar 2010

Introduction: Section 512(c)

The DMCA's takedown provision, 17 U.S.C. section 512(c), significantly modified the playing field for content owners, creators, and hosts. Under section 512(c), rightsholders can issue notifications to service providers requesting that they remove user-uploaded content hosted on the provider's site because it allegedly violates U.S. copyright law. See, e.g., Chilling Effects' DMCA page and FAQ. This essay examines the actual use of section 512(c) and the growing problem of the automatic takedown filter.

The Hall of "Fame"

Since section 512(c)'s enactment, thousands of takedown notices have been issued. Many of those notices were issued reasonably, in good faith, and in compliance with section 512(c)'s intended goals. Others, however, were not.

For example, after blogger Perez Hilton posted a video criticizing Miss California Carrie Prejean for opposing same-sex marriage, the National Organization for Marriage used a few seconds of his video in a response ad. Perez then issued a takedown notice alleging copyright infringement over what was clearly a fair use. As others have noted, this was an abuse of the DMCA process. It nevertheless knocked the clip off YouTube temporarily.

Similarly, NPR issued a takedown notice when an anti-same-sex-marriage group used 21 seconds of an NPR show in its political ad. Fair use? Yes. Removed from YouTube nevertheless? Yes.

And in a third example, after the National Organization for Marriage made a video depicting the rise in gay rights as a coming storm, a second organization obtained and released clips of the rather horrific auditions for NOM's ad. Rachel Maddow played some of those auditions (~two minutes into the link) on her show to make the comedic point that "pretending to be a straight person hurt by gay marriage [] is apparently very, very challenging." In response, of course, NOM issued a takedown notice. MSNBC's use was clearly a fair use and NOM's takedown thus an abuse of the DMCA. Still, the content was removed.

These are not the only examples. Indeed, some studies have suggested that roughly 30% of takedown notices present substantive legal flaws, indicating that they seek removal of noninfringing works.

The Unblinking Eye

Processing takedown notices costs service providers time and money. There is also continuous pressure on service providers to "reasonably implement[] . . . a policy that provides for the termination [of access to the provider's services] . . . for repeat [copyright] infringers." See, e.g., 17 U.S.C. section 512(i)(1)(A) (stating that service providers that fail to do so lose their section 512 safe harbor). Further, service providers face pressure to implement tools allowing rightsholders to do less of the heavy lifting in policing content.

The end product? Automatic takedown filters like YouTube's ContentID. ContentID matches audio and video files provided by rightsholders against user-uploaded content. When a rightsholder signs up to use ContentID, it instructs the program what to do when it finds a match: "monetize, track, or block" the content. Once the rightsholder has done so, ContentID is "Fully Automated. Once you're set up, Audio ID and Video ID identify, claim, and apply policies to YouTube videos for you."

In other words, tools like ContentID are fully automated DMCA takedown clones. If a rightsholder doesn't want any incarnation of its works on YouTube, then so it shall be; ContentID will act as a gatekeeper, stopping user-generated content from ever appearing on the site.
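The match-then-apply-policy pipeline described above can be sketched in a few lines of code. This is a purely illustrative model, not YouTube's actual implementation: every class and function name here is an assumption, and a real system would use perceptual audio/video fingerprinting rather than exact hashes. The structural point it captures is that once a rightsholder registers a reference file and a policy, every future match is handled automatically, with no step at which fair use could be assessed.

```python
import hashlib

# Hypothetical sketch of an automated content filter in the style of
# ContentID. All names are illustrative assumptions; real systems use
# perceptual fingerprinting, not exact hashes.

POLICIES = {"monetize", "track", "block"}

class ReferenceLibrary:
    """Reference files supplied by rightsholders, each tied to a chosen policy."""

    def __init__(self):
        self._policies = {}  # fingerprint -> policy

    @staticmethod
    def fingerprint(data: bytes) -> str:
        # Stand-in for real fingerprinting: an exact hash only catches
        # byte-identical copies, unlike perceptual matching.
        return hashlib.sha256(data).hexdigest()

    def register(self, reference: bytes, policy: str) -> None:
        # The rightsholder chooses the policy once, up front.
        if policy not in POLICIES:
            raise ValueError(f"unknown policy: {policy!r}")
        self._policies[self.fingerprint(reference)] = policy

    def screen(self, upload: bytes) -> str:
        # Fully automated from here on: any match gets the rightsholder's
        # chosen policy applied, with no human review and no fair-use inquiry.
        return self._policies.get(self.fingerprint(upload), "publish")
```

Note the design consequence: a "block" policy is applied to a matching upload before it ever appears on the site, regardless of whether the use would qualify as fair.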

Going Forward: The Problem

The problem is that entirely too many non-infringing videos are removed. This is the end product of automated processes--currently incapable of assessing fair use--blocking content indiscriminately. With ContentID, there is no human element once the process is initiated. Even if your use is transparently fair, ContentID's unblinking digital eye cannot tell, and it silences you nevertheless.

The problem is also that abusive takedowns are nearly sanctionless. Once a notice is sent, the video is taken down, and perhaps put back up days later if a counternotice is filed. Some removed content will never be revived. Some users will be discouraged from future digital expression. This is an especially high price to pay when notices are misused, yet few sanctions are available. Section 512(f) provides that "any person who knowingly materially misrepresents under this section . . . that material or activity is infringing . . . shall be liable for any damages, including costs and attorneys' fees . . . ." In some courts, however, the user whose content was removed can recover only by showing "subjective bad faith" on the part of the notice issuer. Such a standard is hardly a check on abuse.

Finally, it is little comfort that the user has post-abuse self-help options. If the user takes the removed material elsewhere, they still face the risk of abusive takedown notices. If the user challenges ContentID's automatic filtration, the rightsholder must manually review the claim but can still send an actual DMCA takedown notice and silence the clip for days. The dual threat of abusive DMCA takedown notices and automatic filters creates a substantially user-unfriendly atmosphere for the expressive culture of this generation and those to come.

Conclusion: The Solution

The solution must address both the state and private party components of the problem. On the state side, 512(c) should be amended to embody fair use principles. Content filters, however, are not state-made and thus may need to be addressed separately (unless revising the DMCA persuades service providers to rethink content filters). Perhaps the best tool for that job is a dramatic increase in public advocacy.

The Internet and digital culture are robust, but they are not immune to censorship. The digital culture is here to stay, and if we allow programs like ContentID, in its current incarnation, to set the gold standard, we chart a path to significantly less expressive freedom. If we insist on retaining a highly restrictive baseline copyright model, we should at least find a more balanced, middle road online. Section 512 and automatic filters are proving to be no such road.


Great article. I've honestly always wondered why clips that I thought were obviously fair use of copyrighted material were constantly being taken down on YouTube. Now I know!

I'm kind of shocked that fair use principles aren't already embodied in 512(c). Do you have any idea if this was just an oversight on the part of the drafters or if it was intentionally excluded?

As you noted, the "subjective bad faith" standard is an incredibly hard thing to prove. Does the fact that a rightsholder first resorted to private methods such as ContentID, was informed by YouTube that the content was fair use, and then subsequently filed a DMCA claim make bad faith any easier to prove?

-- EdwardBontkowski - 20 May 2010


Thanks for your comments. As to fair use, I don't believe it was an oversight, no. That's just the DMCA for you.

As for the bad faith element, I don't recall off-hand any cases discussing the interplay of facts like those you proposed. YouTube wouldn't traditionally make a determination whether something was fair use but if YouTube had for some reason done so and communicated it to the rightholder (before the takedown notice was filed) I would think that fact would be relevant to the user's allegation of bad faith.

-- BrianS - 20 May 2010





r12 - 17 Jan 2012 - 17:48:21 - IanSullivan