
Jawboning in Plain Sight: The Unconstitutional Censorship Tolerated by the DMCA

The DMCA should be amended to remove its coercive qualities so that providers, and the user expression of speech they facilitate, are no longer vulnerable to government-enabled censoring.

Section 512 of the Digital Millennium Copyright Act (DMCA) provides critically needed liability protection for the providers of intermediary services that the internet depends on. But because of the jawboning pressure baked into the statute, that protection can come at the cost of the very user expression these services exist to intermediate. Such an unconstitutional effect is not inevitable, however. This paper explains how the DMCA could be transformed from its current state as a censor into the speech-facilitating, intermediary-protecting law it was intended to be.

In the summer of 2024, a group of record labels led by Universal Music Group sued Verizon, as an internet service provider (ISP), for contributory and vicarious copyright infringement arising from how individuals had used its services.[1] The plaintiffs’ theory of liability was premised on the accusation that Verizon had failed to terminate users alleged to have engaged in illegal filesharing.[2] This lawsuit echoed several earlier lawsuits brought by large corporate copyright owners against other major ISPs. The first such lawsuit was BMG v. Cox, which opened the floodgates for this sort of litigation against third-party intermediaries, especially those that, like Cox and Verizon, were broadband ISPs.[3]

At issue in each of these cases was Section 512 of Title 17 of the U.S. Code, which the Digital Millennium Copyright Act (DMCA) added to the copyright statute.[4] This addition created a system of safe harbors from which third-party internet intermediaries of various sorts could benefit. With these safe harbors, intermediaries would not have to worry about taking on liability for copyright infringement based on how others had used their systems and services to convey potentially infringing expression, as long as they complied with the safe harbors’ various conditions.[5]

The problem, however, is that the conditions that third-party intermediaries have to meet to benefit from these safe harbors include the removal of certain user speech, or even, as the courts have outlined, speakers themselves.[6] Worse, as these cases have made clear, the DMCA requires the intermediaries to apply this censorship without there ever being a judicial finding that the speech or speaker activity was even wrongful.[7] Mere accusation is enough to force the intermediaries to take action, lest they lose access to the protection of the applicable safe harbor and find themselves staring down a potentially expensive infringement lawsuit. Thus, the safe-harbor system creates a situation where non-wrongful and constitutionally protected speech ends up being removed by the intermediary service it was expressed on, because the law has established a mechanism to penalize these services if they do not remove it.

But, as the United States Supreme Court made clear in another 2024 case, NRA v. Vullo, the First Amendment is not designed to be circumvented in this way. Legal compulsion should not be able to force the suppression of speech that could not be targeted by law directly by instead being applied to a third party on whom the speaker depends.[8] This sort of constitutional end-run, in which speech suffers sanction thanks to intermediary pressure, is referred to as “jawboning” and amounts to unconstitutional censorship.[9]

Jawboning is now getting new attention in the digital space. For instance, in addition to the Vullo case, the Supreme Court also decided Murthy v. Missouri and Moody v. NetChoice in 2024, each of which dealt with government pressure on internet platforms (referred to here interchangeably as “intermediaries” or “providers”) as a means of affecting what speech could appear online.[10] But the DMCA has been working this way for more than a quarter of a century. That this sort of censorship via intermediary pressure has been happening for so long does not mean there is no significant constitutional problem; rather, it means it is time to take notice.

The term “jawbone” has long meant something along the lines of “to chat away,” and over time it became synonymous with securing credit.[11] By the 1960s, the term had evolved further to connote lecturing or hectoring as a means of persuasion, and it came to describe a tactic that presidents and administration officials of that era used to secure policy outcomes, particularly economic ones, that might have been beyond their authority to cause directly.[12] Some argue that the term has a biblical connotation, referencing the story of Samson killing 1,000 men with a donkey’s jawbone, to describe the unexpected power of such an otherwise slight weapon.[13] The true etymology appears lost to time, but its contemporary usage, particularly in the internet context, refers to state actors leveraging regulation to pressure a third party into punishing speech that the state actor could not constitutionally censor directly.[14] While some consider this jawboning pressure to be informal, it is not always.[15] The issue with jawboning is that government pressure can compel an intermediary to act in a way that achieves a regulatory result the government sought but was barred by the First Amendment from compelling directly.[16]

Such pressure is unconstitutional, as the Vullo decision explains. In that case, a New York state official displeased with the National Rifle Association (NRA) wanted to censor its expression.[17] Because the First Amendment does not allow speakers to be punished for their speech, the official instead turned their state power toward the insurance market they regulated.[18] The Court found that while the state official was always free to exercise their own enforcement power when warranted, they were not free to do what they did here: threaten the insurance companies with enforcement actions if they did not stop doing business with the NRA, as a means of punishing the NRA. The official’s actions violated the NRA’s First Amendment rights by exploiting its dependency on the insurance companies and applying pressure on them so they would, in turn, pressure the NRA, or, as courts have said, “The analogy is to killing a person by cutting off his oxygen supply rather than shooting him.”[19] Such punitive pressure against the NRA’s speech did not suddenly become constitutional just because it was applied to a middleman rather than to the target the state official was trying to silence.[20]

Jawboning comes up in the internet context because every party posting content on the internet is dependent on other systems and services. Because the First Amendment protects freedom of speech, if a government actor disapproves of any online expression, they might consider pressuring the third-party intermediary services that internet users depend on to facilitate their expression, leaning on those services to remove the objectionable content.[21] But just as the state actor in Vullo could not pressure the insurance companies that the NRA depended on as a means of influencing the NRA’s expression, neither can state actors impose laws that pressure internet intermediaries to interfere with users’ online expression. The First Amendment forbids this sort of end-run around its protections for any purpose—even copyright enforcement.

The DMCA was passed in 1998 as part of a broader bill updating the copyright statute.[22] Among other changes, it added two new sections to the 1976 Copyright Act at Title 17: Section 1201 et seq. (which is not relevant to this discussion) and Section 512 (which is relevant to this discussion). The latter section created four statutory safe harbors, each of which applies to a different type of internet intermediary (or “service provider,” in the statute’s terms), depending on the type of intermediating service it provides, to help insulate the intermediary from any secondary liability it might otherwise face arising from how others use its systems and services.[23] To be eligible for any of these safe harbors, providers must comply with several criteria, some specific to each, and some applicable to all.

The first safe harbor, codified at Section 512(a), applies to providers of “Transitory Digital Network Communications.”[24] Examples of these intermediaries are the companies that provide internet access generally, or what we refer to as “ISPs,” including broadband ISPs like Verizon and Cox. The second safe harbor, codified at Section 512(b), applies to providers of “system caching.”[25] Examples of these intermediaries are Akamai and Cloudflare, which help speed up content delivery on the internet by holding onto copies of content at network midpoints so they can be served up to users more quickly.[26] These two safe harbors are the most straightforward of the four and have the fewest criteria with which eligible providers must comply.
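For readers who find code clearer than prose, the caching function these 512(b) intermediaries perform is easy to model. The sketch below is purely illustrative (the class name, the TTL value, and the fetch callback are inventions for the example, not any provider’s actual design); it shows how serving a stored copy from a network midpoint spares a round trip to the origin server:

```python
import time

class EdgeCache:
    """Toy model of a caching intermediary: serve stored copies of content
    from a network midpoint rather than re-fetching from the origin."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds  # how long a cached copy stays fresh (illustrative)
        self.store = {}         # url -> (content, fetched_at)

    def get(self, url, fetch_from_origin):
        entry = self.store.get(url)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]  # cache hit: no trip to the origin server
        content = fetch_from_origin(url)  # cache miss: fetch and keep a copy
        self.store[url] = (content, time.time())
        return content

cache = EdgeCache()
page = cache.get("https://example.com/page", lambda url: f"origin copy of {url}")
```

The stored copies are what implicate copyright, which is why caching got its own safe harbor.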

The third safe harbor, codified at Section 512(c), applies to service providers that enable information to be “stored at the direction of users.”[27] When people speak of the “DMCA,” they are often referring to this safe harbor and the many internet services it covers, such as social media sites or YouTube, where users have published, or “stored,” some “information” they have expressed.[28] Most notably, this safe harbor, which is the most complex of the four in terms of its criteria, includes the requirement that providers abide by a “notice-and-takedown” system.[29] Under this system, the putative copyright holder or its agent sends a “notice,” which is sometimes colloquially referred to as a “takedown demand,” to the service provider alleging that some material a user has posted violates their copyright.[30] Having now been put on notice of potential infringement, the service provider must then act to quickly remove the allegedly infringing material to avoid sharing in liability for it.[31] 
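To make the resulting incentive structure concrete, the following sketch models, in deliberately simplified form, what a 512(c) provider’s notice handling effectively reduces to (all names and interfaces here are hypothetical, drawn neither from the statute nor from any real system). The telling detail is what the function never does: test whether the claim is valid.

```python
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    claimant: str      # whoever claims to hold the copyright
    target_url: str    # where the allegedly infringing material lives
    work_claimed: str  # the work said to be infringed

def handle_notice(notice, remove_content, notify_user):
    """The safe-harbor-preserving move is always the same: remove first.
    No step here adjudicates the claim, because guessing wrong about its
    validity risks forfeiting the safe harbor."""
    remove_content(notice.target_url)                # "expeditious" removal
    notify_user(notice.target_url, notice.claimant)  # the user may counter-notice

handle_notice(
    TakedownNotice("Label X", "https://example.com/v/123", "Song Y"),
    remove_content=lambda url: print(f"removed {url}"),
    notify_user=lambda url, who: print(f"told poster of {url} about {who}"),
)
```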

The fourth safe harbor, codified at Section 512(d), applies to “information location tools.”[32] Examples of these include search engines like Google and Bing. A provider seeking to avail itself of this safe harbor has more provisions to comply with than those using the (a) or (b) safe harbors, but not as many as those using the (c) safe harbor, which is the most detailed.

A few provisions apply across all four safe harbors. One of these provisions is that the providers must have a policy for terminating users who are repeat infringers.[33] The statute is not specific as to what policy is required; the language simply calls for there to be “a policy.”[34] Historically, courts have read it flexibly, and providers’ practices were accordingly varied.[35] But in cases like Cox and its successors, courts have been concluding that this provision requires establishing a concrete rule for how many times a user can be accused of infringement before their account is terminated.[36]
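Reduced to its mechanics, the rule these courts have read into the statute is nothing more than a per-user counter. The sketch below models it that way; the three-strike threshold and all the names are invented for illustration (the statute itself specifies no number), and note that what gets counted is accusations, not adjudicated infringements:

```python
from collections import defaultdict

class RepeatInfringerPolicy:
    """Toy model of the concrete termination rule courts have derived from
    512(i): tally accusations per user and terminate at a fixed threshold,
    with no step that ever tests whether any accusation was valid."""

    def __init__(self, strike_limit=3):
        self.strike_limit = strike_limit  # hypothetical; the statute names no number
        self.strikes = defaultdict(int)

    def record_accusation(self, user_id):
        self.strikes[user_id] += 1
        return self.strikes[user_id] >= self.strike_limit  # True means terminate

policy = RepeatInfringerPolicy()
for _ in range(3):
    must_terminate = policy.record_accusation("user-123")
print(must_terminate)  # True after the third unadjudicated accusation
```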

The notice-and-takedown system, which is distinct from the repeat infringer provision, applies only to the 512(c) safe harbor, although the 512(b) and (d) safe harbors reference it.[37] One reason for the notice-and-takedown system is that the DMCA excuses intermediaries from having to police their own systems for infringement.[38] This is a good thing because such policing would be impossible given the large volume of content involved.[39] Policing for copyright infringement would also require the service providers to know much more about a work’s ownership, applicable licenses, and other stipulations regarding its use than is feasible.[40] The DMCA puts the onus of having such knowledge on the copyright owner, who at least in theory should be better positioned to know those details.[41]

In practice, however, not everyone who sends takedown notices actually owns what they claim to own. The Cox case showcased this problem: there, the district court dismissed an entire set of claims by one of the plaintiffs, which had earlier sent takedown notices, because that plaintiff had no right to enforce a copyright in the implicated works.[42] And even when the ownership claims are legitimate, not every copyright owner adequately considers whether the work appearing on a provider’s system may in fact be a fair, rather than an infringing, use and therefore lawful, even though they are supposed to make that determination before demanding the work’s removal.[43]

Despite these inherent defects in the notice-and-takedown system, the provisions of this safe harbor nevertheless require the service provider to act on these notices upon receipt and treat them as presumptively valid, even when they are not. Because there is no requirement that these claims first be tested in court, service providers have no effective way to weed out invalid takedown demands, and trying to assess their validity instead of automatically removing the content risks forfeiting the safe harbor. Thus, the safe harbor system effectively forces intermediaries to act upon accusation rather than proven fact, often treating user expression as presumptively wrongful and punishable even though it may not be.

There is a “put back” provision in the DMCA, whereby a speaker whose expression has been removed can challenge the removal via a counter-notice.[44] But this provision has proven to be an inadequate remedy for wrongful takedowns. It exposes the user who posted the content to significant risk, not just legally, in how it invites a claimant to sue them, but also as a general safety matter, given that these counter-notices identify the user to the very people objecting to their speech (which also compromises the user’s right to speak anonymously).[45] As a result, counter-notices are rarely filed.[46]
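The timing of the put-back process compounds the problem. The sketch below models the sequence prescribed by Section 512(g), as this paper reads it, with simplified inputs: even a successful counter-notice leaves the material down for at least 10 business days, and it comes back within 14 only if the claimant has not filed suit in the meantime:

```python
def putback_decision(business_days_since_counter_notice: int,
                     claimant_filed_suit: bool) -> str:
    """Simplified model of the 512(g) put-back window: restoration waits
    at least 10 business days after the counter-notice and must occur
    within 14, unless the original claimant has gone to court."""
    if claimant_filed_suit:
        return "keep down"  # litigation keeps the material offline
    if business_days_since_counter_notice < 10:
        return "wait"       # content stays down during the statutory window
    return "restore"        # put the material back within 10-14 business days

assert putback_decision(5, False) == "wait"
assert putback_decision(12, False) == "restore"
assert putback_decision(12, True) == "keep down"
```

Even in the best case, then, lawful speech stays suppressed for weeks on the strength of an untested accusation.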

Regardless of its original intention, the reality of how the DMCA works is that, all too often, speech that should not be punished is punished, as are the users posting that speech, whether or not the accusations against them are valid.

To understand why service providers cannot afford to be cavalier about whether they are protected by a DMCA safe harbor, one needs to consider the nature of the internet and service providers’ role in it.

The internet is a communication medium that depends on intermediaries to help content, including expression of speech, get from one person to another. These intermediaries, referred to in the DMCA as service providers, come in all shapes and sizes. The DMCA safe harbors sort them into several broad categories, but, even within each category, the types of service providers are myriad, ranging from providers of high-capacity backbone infrastructure to ISPs that serve as the onramps for users to connect to the wider internet. There are also other forms of online services that help users express themselves and interact with others’ online expression, including email, social media, streaming services, and online storage. Together, these service providers allow people to engage with others’ content in countless ways, be it contemporaneously or asynchronously, via one-to-one communications or one-to-many, and via stored information or live exchange.

The fundamental reality, however, is that no internet communication could happen at all without these providers being available and able to help facilitate that exchange of information and expression. It is also a fundamental reality that no such intermediary could be available to serve that function if it were not legally safe to do so.[47] If providers could be forced to share in whatever liability might be manifest in the expression they help facilitate, it would be too risky for them to do so.[48] Intermediaries are also not necessarily all large, well-capitalized companies. Some are run by non-profits, and intermediary services can even be offered by individuals, like those with their own blogs that allow for user interaction in comments. Because of the very nature of the internet, even the smallest providers can facilitate enormous amounts of information and expression, especially in proportion to their own size and resources. Furthermore, the risk for providers is not just whether they might ultimately be found liable for how users make use of their services; even if there were no liability manifest in the user content, simply having to defend against claims of liability, even non-meritorious ones, can itself be ruinous. Even a single accusation can be catastrophic, let alone the countless accusations that can come with high volumes of user activity.[49]

Even if intermediaries tried to police all the user expression they facilitate, the sheer volume of that expression would make effective policing impossible. There is no way for a provider to know with any certainty whether any given expression is actually wrongful. Providers of internet systems and services are not courts; they do not have access to the advocacy and evidence that could lead to a reasonably reliable finding of wrongfulness.[50] Instead, they can do little more than guess. But because the risk of guessing incorrectly can be so devastating to a platform’s viability, potentially bankrupting it with litigation and liability, providers will always have to err in favor of presuming that the content is wrongful and removing it.[51]

Without some durable form of protection shielding service providers from liability arising from how users make use of their systems, the legal risk of providing intermediary services would be overwhelming. That risk would make it impossible for intermediaries to afford to facilitate any online expression, no matter how lawful or socially valuable that expression might be. As a result, it would be very difficult for even lawful content to remain online, because service providers would frequently have to refuse it to protect themselves—assuming, of course, that in the absence of adequate statutory protection they could be in the business of facilitating any user expression at all.

This potential outcome concerned Congress, so to ensure that the internet could remain a vibrant communication medium, legislators passed several laws to provide intermediaries with the statutory protection needed to limit liability worries and facilitate user expression.[52] The first statute was Section 230 of the Communications Decency Act, passed in 1996.[53] Its operation is straightforward: If the provider’s own expression is at issue, the statute offers no protection from any liability that might arise from the problematic expression, but if that expression were created by another, Section 230 shields the provider.[54] The provider does not need to do anything to qualify for this protection; it functions as a straightforward immunity rather than a safe harbor. As a result, the protection is more durable and useful and also avoids most issues of jawboning. The statute’s protection pivots only on who created the content at issue.[55] 

Section 230 is an effective statute that serves a critical purpose in insulating internet intermediaries, making it so they can be available and able to help facilitate online expression. But there is a limitation built into it that constrains its purview: If the accusation of wrongfulness in the user expression relates to “intellectual property,” Section 230 provides no protection.[56] There is some dispute over what types of claims are covered by the “intellectual property” exception, but it is generally accepted, at minimum, to apply to “federal intellectual property” claims, which would include copyright.[57] For these excepted claims, Congress endowed providers with an alternative form of statutory protection in the DMCA, which was passed two years later.

Unlike Section 230, however, the DMCA offers providers only a conditional safe harbor, rather than full immunity, which gives providers less certainty and therefore less reliable protection. Still, it is an important law with regard to provider protection because its essential job is the same: to make it possible to provide systems and services that facilitate users’ expression without having to fear punishment for how others use those systems and services.[58] But in its current form, it does that job suboptimally. The end result is that the DMCA, although intended to provide the statutory protection needed to ensure that others’ speech can be facilitated online, instead, because of jawboning, helps ensure that it will not be.

The case Lenz v. Universal tells the story of a mother who one day posted a video on YouTube of her toddler dancing along enthusiastically to the song “Let’s Go Crazy” by the artist known as Prince.[59] But her desire to share her child’s joy was soon stymied by the music publisher that administered the rights to the song. It sent a takedown demand to YouTube to have her video removed, without any consideration of how this particular use of the copyrighted song was likely fair, rather than infringing, and thus lawful.[60] The video was eventually restored, but not before the mom’s right to convey a truth about her life had been trampled by the removal.[61]

This example of lawful speech being removed in response to takedown notices is far from a rare exception. Since the DMCA took effect, countless instances of lawful, non-infringing speech have been removed, including political speech, competitor speech, and other criticism.[62] The DMCA has also been used to target content over copyright concerns even when the content does not infringe because the use is fair, the work is in the public domain, or the copyright belongs to someone other than the sender.[63] Sometimes the targeted content implicates no copyright at all.[64] This is such a common occurrence that sites have sprouted up to archive takedown notices so that these demands to remove online expression can be tracked as best as possible.[65]

Such practices have substantially harmed speech over the years. But even one instance of the removal of lawful speech via a demand authorized by the DMCA is unconstitutional. When the First Amendment says “make no law […] abridging the freedom of speech,” it does not include any qualifiers permitting that freedom to be curtailed in certain circumstances; the language stands as a permanent bar against any statute stifling lawful speech. The premise the DMCA is built around—that takedown notices always represent legitimate claims of copyright infringement—has simply not been borne out.[66] But as long as mere accusation can compel intermediaries to remove content that has never been adjudicated wrongful, the removal of lawful speech due to invalid demands is inevitable.

The problem has only gotten worse in recent years, as illegitimate takedown notice senders play games with the repeat infringer policies providers must now implement.[67] Because providers now have to terminate users who have received a few accusations, bad actors looking to censor such users can manipulate that obligation to their advantage. These actors can target specific users simply by sending a few bogus complaints to try to drive them off a service.[68] Thus, instead of having to counter criticism, censors can silence the critic by leveraging the repeat infringer requirement, knowing that their infringement claims do not need to be validated to terminate a user.[69] 

Moreover, multiple takedown notices may not even be necessary because even one notice can cow a target into self-censorship, leading them to remove potentially lawful expressions of speech to avoid accruing a “strike” and putting their ability to use their preferred platform at risk.[70] Because being terminated by a provider can mean getting completely cut off from long-cultivated audiences and potentially even livelihoods, it is a risk many users cannot afford to take.[71] 

This harm to speakers and free speech is exactly what jawboning purposefully exacts: the silencing of speech that the government disfavors. Even if the intention were limited to true instances of infringement, the mechanism itself is not so limited; it indiscriminately silences whatever it can get in its sights. Nor does it matter whether it is the user or the provider who ultimately suppresses the speech; jawboning aims to ensure that pressure on the intermediary causes speech to be suppressed, one way or another.

The constitutional harm caused by the current version of the DMCA goes beyond simply censoring instances of free speech. Speakers themselves are suffering sanctions because of the way courts have begun to interpret the DMCA’s repeat infringer provision. Providers must now not only remove user speech to protect themselves, but also revoke repeat offenders’ use of their systems and services entirely. This interpretation of the DMCA increases its jawboning problem by raising the stakes: Mere accusation can now do more than just silence any given expression—it can silence an entire source of expression.

Furthermore, the way this provision now applies to intermediaries using the 512(a) safe harbor makes the risk of this harm to speakers especially acute. In Cox and other subsequent cases, courts have interpreted the repeat infringer provision such that the obligation to terminate users is tied to the takedown notices that the service providers have received, regardless of their validity.[72] This interpretation means that intermediaries are now obligated to treat all received notices as presumptively valid, and, in situations where enough notices have accrued against any particular user, to terminate that user’s access to their service—and they are so obligated even when the notices are incorrect.[73] 

In fact, even in the Cox case, the district court found many of the copyright ownership claims to be invalid and thus dismissed infringement claims predicated on them.[74] Yet the court still found that Cox lost its safe-harbor defense by not terminating users after having received takedown notices, all of which it apparently needed to regard as presumptively valid, even though, as the court itself acknowledged, many were not.[75]

In addition, Cox effectively created a new requirement for the 512(a) safe harbor, obligating providers to receive and respond to takedown notices even though the statutory text had never required it. Takedown notices are entirely products of the 512(c) safe harbor.[76] While the takedown notices described in 512(c) are also used in the context of the 512(b) and (d) safe harbors, the 512(a) safe harbor makes no mention of them at all. This absence matters because, while takedown notices are subject to abuse even in the 512(c) context, the language of that safe harbor at least articulates some requirements that notices must meet to be valid.[77] These requirements are not particularly robust, nor adequate to deter abuse, but they point to the importance of defining some sort of criteria.[78] No validity requirements in the statute constrain anyone sending notices to providers eligible for the 512(a) safe harbor, because that safe harbor does not hinge upon or involve takedown notices of any sort, let alone those described as part of the 512(c) safe harbor. As a result, the notices sent to service providers relying on the 512(a) safe harbor are even more likely to represent invalid claims, because there are no statutory criteria to which they must conform.
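The contrast is easy to see if one reduces the 512(c)(3)(A) criteria to the checklist they effectively are. The sketch below paraphrases the statute’s six required elements using invented field names; a provider can confirm a notice is facially complete this way, but nothing in the check proves the claim is true. For notices aimed at 512(a) providers, not even this checklist exists:

```python
def facially_complete(notice: dict) -> bool:
    """Checks a takedown notice for the six elements Section 512(c)(3)(A)
    requires. Completeness is all this establishes: none of these fields
    shows the underlying infringement claim is actually valid."""
    required = [
        "signature",            # (i) of someone authorized to act for the owner
        "work_identified",      # (ii) the copyrighted work claimed to be infringed
        "material_identified",  # (iii) the allegedly infringing material
        "contact_info",         # (iv) how to reach the complaining party
        "good_faith_statement", # (v) belief that the use is unauthorized
        "accuracy_statement",   # (vi) accuracy and authority, under penalty of perjury
    ]
    return all(notice.get(field) for field in required)
```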

Another reason that the DMCA is capable of causing acute harm to those exercising free speech online is because many of the 512(a) intermediaries that are now obligated to terminate user access are general-purpose, broadband ISPs. While it would still be sufficiently problematic to cut users off from the various specialized services eligible for the 512(c) safe harbor, cutting them off from full-service broadband ISPs causes even more serious harm because the broadband ISP market is not a particularly competitive one.[79] Users may effectively have only one choice of provider in their areas.[80] If they lose access to that provider’s services, then they may lose access to the internet altogether. Given how ubiquitously important the internet is, losing online access is a particularly injurious consequence to a user—even one who might deserve some sort of penalty.[81] It is completely untenable to inflict such a penalty on an individual user, but it is even more problematic when the terminated “user” is an ISP itself, thereby terminating service to hundreds, if not thousands, of people who did nothing wrong.[82]

The problem with jawboning in general is that it gives the government a way to indirectly attack speech that it cannot attack directly.[83] By pressuring an intermediary that a user relies on to post content, and forcing that intermediary to act against speech, the government attacks the user’s First Amendment right to free speech.[84] But in so doing, it also attacks the rights of the intermediary, which loses the ability to decide for itself what users and content it wants to be associated with.

In the 2024 case of Moody v. NetChoice, the Supreme Court recognized that the First Amendment protects providers’ discretion to choose what users and content to allow on their platforms.[85] The decision came in response to twin challenges to laws passed by Texas and Florida, each of which had tried to force providers to moderate content as the state preferred.[86] Each of these states’ laws was an example of jawboning: The states had a preference for the type of content that should appear online, and their laws applied legal pressure against the intermediaries users relied on to share their speech online, thereby ultimately shaping what speech could be posted.[87] But the First Amendment stands against this proposition.[88] Like a newspaper, which cannot be forced to run op-eds it does not wish to run, internet intermediaries cannot constitutionally be forced to platform or deplatform expression.[89] And yet this sort of interference with providers’ choices is exactly what the DMCA enables. Receiving a takedown notice effectively overrides a provider’s preference to continue facilitating a user’s speech that no court has ever found infringing.

That the provider ostensibly “chooses” to remove the targeted material, rather than ignore the notice, does not change the jawboning math. As the Supreme Court explained in Vullo, the fear of sanction for failing to act can itself amount to impermissible legal coercion. Allowing that fear of sanction to override the volitional discretion protected by the First Amendment inevitably results in harm to speech that should have been protected.[90]

Thus, the expressive harm also extends to the provider when it loses its moderation discretion: not just its choice of how to moderate, but also whether to moderate at all. This matters because not all intermediaries are able to comply with the DMCA’s requirements. Small intermediaries, for example, can easily be overwhelmed by the resource drain of complying with a potentially high volume of takedown notices.[91] These compliance costs can come at the expense of providing high-quality intermediating services to users, or even of providing services at all. Providers that cannot comply with the DMCA may have to stop providing intermediary services, and these burdens also make it that much harder for new intermediaries to gain a foothold in the market. The result is fewer providers, and, because the DMCA favors larger providers with better resources, the larger ones will be the few that are left.[92] Even worse, with fewer providers remaining, the consequence of users being terminated from any one of them becomes that much more problematic.

Another way that providers suffer from takedown notices is in the removal of more and more material from their systems. The core interest of a service provider is to provide its service to users by hosting, serving, and otherwise facilitating user expression. The DMCA intentionally takes aim at the intermediary’s raison d’être and interferes with it.

And it takes this aim unnecessarily, as Section 230’s example demonstrates. With that statute, Congress was able to provide protection to intermediaries with no strings attached and still achieve the results it wanted. As with the DMCA, when Congress passed Section 230, it had a clear preference for the type of expression that would appear online.[93] But unlike the DMCA, which tries to force providers to moderate their sites as the government prefers, Section 230 instead uses the liability protection it offers to incentivize providers to make their own decisions accordingly.[94] The government still tries to exert influence over online speech, but without unconstitutionally interfering with providers’ own constitutionally protected editorial discretion—or the rights of their users. In this situation, if a user’s expression is removed, it is because the provider exercised its own right to choose to remove it—not because the government unconstitutionally demanded it.[95]

The DMCA is very much a shoot-first-ask-questions-later-if-ever sort of law. That online speech suffers such severe consequences before any court has adjudicated it is not incidental; it is by design.

In developing the DMCA, Congress actively created a framework for jawboning. It pursued the goal of eliminating copyright infringement online not by further opening the door to claims against actual infringers, but by creating an extrajudicial remedy—and a purposefully censorial one—against alleged infringements that would avoid the courthouse entirely.[96] The DMCA was supposed to make it easy to remove user expression, and that is exactly what has transpired.[97] And would-be censors well understand the power the statute has handed them: All one has to do is make an accusation that there has been infringement to scare the intermediary into acting against the targeted expression; no substantiation of the claim is needed. The ease of sending notices and exploiting this platform vulnerability has created enormous censoring power, distributed that power widely, and—given how the DMCA is currently written and enforced—made that power easy and free to wield.[98] 

While some copyright owners have argued that there is a cost associated with enforcing their copyrights, sending takedown notices is orders of magnitude less costly than pursuing litigation.[99] And for takedown notice senders who are unconcerned about the validity of their demands, there is effectively no cost at all. It is therefore no surprise that so much user content continues to be removed, regardless of whether it is problematic or not, without the oversight of the courts.[100] With the DMCA, someone complaining about online expression no longer needs to prove their case in court; they can instead simply apply pressure on intermediary providers to obtain a much more expedient remedy. And they can do so without any of the safeguards of judicial due process that help challenged expression avoid unjust punishment. This arrangement presents a constitutional problem because prior restraint is generally forbidden by the First Amendment.[101] Yet pre-adjudicative sanction is what the DMCA was purposefully designed to deliver.

There is nothing inherently unconstitutional about statutory immunity for intermediary providers. If anything, such immunity vindicates the values of the First Amendment, helping enable more online speech by ensuring that providers can be available and able to facilitate it.[102] But, as currently drafted and interpreted, rather than advancing the First Amendment, the DMCA offends it. To be a valid exercise of legislative power, the safe harbor needs to lose its coercive jawboning pressure and instead simply offer the intermediary liability protection it promised.

Ideally, any statutory protection should be unconditional and written similarly to Section 230. If a provider has to litigate whether it has adequately complied with the conditions to be entitled to protection, then the benefit of the protection is lost. Having to spend resources litigating can be financially devastating to a provider, even bankrupting them.[103] 

However, if there are to be conditions for protection, those conditions cannot include having to act against others’ speech.[104] Congress can and should take a number of steps to at least mitigate the constitutional problems with how the DMCA’s conditions are currently drafted and interpreted.

One example of low-hanging fruit is to make explicit in the legislation that takedown notices have no bearing on the “repeat infringer” provision applicable to any of the safe harbors. As the statute is currently drafted, the repeat infringer provision would seem to require repeat instances of actual infringement, not just repeated accusations of infringement.[105] Finding the receipt of accusations alone to be a predicate for triggering the repeat infringer provision and its obligation to terminate users is entirely a judicial invention—and a highly problematic one not supported by the statute.[106] Before the Cox case, providers had discretion as to whether, when, or how to terminate users and, outside the copyright context, still do.[107] That discretion needs to be restored.

It especially needs to be restored for the 512(a) safe harbor, where takedown notices have absolutely no function.[108] It should also be restored for the 512(c) and other safe harbors, which share the same “repeat infringer” termination requirement. The statutory language articulating that requirement makes no reference to any interplay between the receipt of takedown notices and any obligation to terminate users, so there should be none.

Removing this invented connection would eliminate much of the jawboning pressure that is currently causing providers to silence users. Eliminating the forced termination of so many users would help alleviate the constitutional harm that arises when a user is silenced—and, in the case of the 512(a) safe harbor, potentially driven off the internet entirely.[109] But the bigger issues with the DMCA are both how it stands as a system of prior restraint, requiring that intermediaries remove online speech and ban platform users without any judicial finding of wrongfulness, and how easy it is for that prior restraint to be exploited by people who want to use jawboning pressure to remove speech they do not like.

Thus, at minimum, the statute must be redrafted to deter and appropriately punish invalid takedown notices—and not just abusive ones, as even innocent mistakes still cause censorial harm.[110] If a sender claims to be the copyright holder in their takedown request, they must actually be the copyright holder, and there must be a meaningful and deterring consequence if they are not.[111] There must also be a meaningful and deterring consequence if the takedown demand fails to account for a licensed use, a fair use, or the lack of a copyright at all. These consequences are needed not only to protect speakers and providers, but also to protect legitimate copyright holders, who can themselves be harmed by the officious meddling of invalid claims.[112] At present, there is no practical way for claims ever to be tested, which means there is effectively no requirement that takedown demands actually be valid, nor any consequence to the sender if they are not.

In theory, there is some deterrence built into the statute at Section 512(f), which allows parties harmed by invalid takedown notices to sue the sender for damages.[113] There have been a few instances of affected speakers, and even providers, who have successfully pursued some form of redress this way.[114] But by and large, the provision has been interpreted in a way that makes it largely ineffective at stopping unjust takedown demands. The courts have again read into the statute language that is not present, which has defanged this deterrence provision.[115] One potential solution would be to redraft the statute to give this provision enough teeth to curb the abuse.[116]

A better solution, however, would be to eliminate the power of invalid takedown notices. If there is to be an obligation for a provider to act, that obligation should be triggered only after there has been a judicial finding of infringement, not just the mere suggestion of it. Not only would such a change alleviate many current problems with jawboning-induced prior restraint, but it would also make the statute more internally coherent. After all, the point of a takedown notice is to supply the platform with the knowledge that there is infringement on its service.[117] But, as has been consistently borne out over the past 25 years, it is not possible for these takedown notices to reliably convey such knowledge.[118] All the notices can convey is that someone suspects infringement, not that there definitively is any. It would take a court, and evidence, to make that determination and justify sanctioning the posted material or the user who posted it.

Requiring courts to weigh in before content could be removed would not be sufficient to fully eliminate the jawboning problem, however, because this requirement may simply change the form of government pressure placed on providers from legislative to judicial. The risk of censorial abuse would still remain because, although requiring judicial oversight before infringement complaints can be regarded as valid will afford affected speakers more due process, it still may not be enough due process.[119] The only real solution is to remove the jawboning pressure from the statute altogether, so that provider protections are never contingent on having to respond to any sort of removal demand.

As noted previously, the jawboning problem with the DMCA calls for the statute to be amended. The DMCA in its current form accentuates the problems with jawboning by allowing government-driven legal pressure on one party to affect the speech rights of another. Given how the DMCA encourages so many takedown demands, the statute exacerbates the very problem it created.[120] 

But simply amending the DMCA on its own will not be enough, because the root of the problem lies less with the DMCA and more with copyright law itself, particularly the judicially created doctrines of secondary liability that courts have baked into it but that the statute itself says nothing about. Under this system, intermediaries can face potentially extreme consequences, including enormously high statutory damages, if they allow speech on their systems and services that might be legally wrongful.[121] Statutory damages are themselves an area of copyright law worth reforming.[122] But these doctrines of contributory and vicarious liability are the real danger for providers; they are the looming threat that makes the DMCA’s jawboning so effective in driving providers to act against their users and the content on their platforms.[123] The DMCA is not speech-protective enough, given how it ends up providing the vector through which impermissible, jawboning-driven removal of user expression occurs.

Still, if we were to repeal the DMCA and remove its statutory protection for providers, the jawboning pressure would continue and be even worse. It is copyright law itself that is the coercive force against providers that causes them to be complicit in silencing speech. The DMCA “did not simply rewrite copyright law for the on-line world”;[124] liability for direct, contributory, or vicarious copyright infringement should be evaluated just as it would have been in the offline context.[125] But for the DMCA providing some relief from that pressure, providers would have to silence even more user expression than they already do.[126]

The real solution is to better tune copyright law to make sure the specter of secondary liability cannot interfere with the important intermediary service that providers offer in helping users express themselves freely online.[127] It is courts construing that intermediating work as something wrongful that leads to the disproportionately silencing collateral effects on online speech.[128] The internet need not be a lawless place; speakers can still be held responsible for their own speech. But liability should not extend to the intermediaries who help them speak, if we want to make sure intermediaries will still be available to provide users with a platform for their speech.[129]

The jawboning that corrupts the otherwise important statutory immunity provided by the DMCA is unfortunately not unique to it. To lawmakers looking for ways to remove certain online speech they do not agree with, the DMCA seems to serve as an instruction manual. Key aspects of the law, like the notice-and-takedown system, are metastasizing and infecting other policy areas.[130] Given the DMCA’s apparent “success” (at least in how it has managed to remain on the books, unchanged and unchallenged, for a quarter of a century), legislators likely find it tempting to emulate its operation in other contexts. The lesson it teaches is that if lawmakers want certain content removed, all they need to do is make it so that when someone tells the provider about the “infringing” speech, the provider will take care of it because it will have to.[131] Recognizing how the DMCA’s current operation is unconstitutional is therefore key to removing the temptation to unleash additional, similar jawboning efforts targeting online speech and its intermediary providers.

But simply deterring further notice-and-takedown regimes is not enough to quell the constitutional problem, because the constitutional infirmity transcends the notice-and-takedown scheme. At the heart of such policies is the government’s desire to subordinate providers’ own right to choose what user expression to facilitate, using the fear of sanction as leverage. The unconstitutionality arises because that fear is dulled only by yielding to the government’s demands to moderate as it prefers. It makes no difference constitutionally whether those demands are conveyed via a notice-and-takedown regime or some other compulsion that forces providers to obey.

For instance, even though the government can enforce antitrust policy, it would create a jawboning problem if the government threatened to break up companies providing internet intermediary services if they did not moderate user speech the way the government preferred.[132] “Act against speech in the way we prefer, or face a dire legal consequence,” is the essence of jawboning, even if that consequence is not specifically the loss of a safe harbor. What makes it jawboning, and therefore constitutionally problematic, is that a provider might suffer any punitive consequence at all if it does not act against speech in the way the government wants.

The Constitution forbids such threats as a policy tactic. Congress cannot create a legal mechanism for pressuring internet intermediaries in order to shape what expression appears online. It does not matter that Congress itself is not directly doing the censoring; as with the DMCA, it has still created the legal power to inflict it.[133] As the Supreme Court has confirmed on several occasions, “[i]t is well established that, as a general rule, the Government ‘may not suppress lawful speech as the means to suppress unlawful speech.’”[134] When the government’s plan of attack against illicit online expression inflicts so much collateral damage on protected expression—either because it is lawful or, even if unlawful, still entitled to due process—it is not a policy the Constitution can countenance.

Intermediary liability protection is a good thing. But conditioning that critical protection on the intermediary removing speakers and their speech transforms the statute that provides it from something that fosters speech into something that unconstitutionally censors it via jawboning pressure. The DMCA should therefore be amended to remove those unduly pressuring qualities so that providers, and the user expression they facilitate, are no longer vulnerable to undue, government-driven censoring.


[1]. Compl. (“Verizon Complaint”) 3, UMG Recordings, Inc. v. Verizon Communications, Inc., No. 24-cv-528 (S.D.N.Y. filed July 12, 2024).

[2]. Ibid., pp. 21-22.

[3]. BMG Rights Management v. Cox Communications (Cox), 881 F.3d 293 (4th Cir. 2018); Sony Music Entertainment v. Cox Communications (Cox II), 93 F.4th 222 (4th Cir. 2024); Cox Pet. for Writ of Cert. (“Br. Pet. Cox”) 37, Cox Communications, Inc. v. Sony Music Entertainment, No. 24-171 (Aug. 15, 2024).

[4]. Digital Millennium Copyright Act of 1998, Pub. L. No. 105-304, 112 Stat. 2860.

[5]. UMG Recordings, Inc. v. Shelter Capital Partners, 718 F.3d 1006, 1022 (9th Cir. 2013). See Viacom Intern., Inc. v. YouTube, Inc., 676 F.3d 19, 27 (2d Cir. 2012); 17 U.S.C. § 512(a). See also Shelter Capital, 718 F.3d at 1028.

[6]. 17 U.S.C. § 512(c)(1)(A)(iii); 17 U.S.C. § 512(i)(1)(A).

[7]. Cox, 881 F.3d at 301-303.

[8]. Nat. Rifle Assoc. of America v. Vullo, 144 S. Ct. 1316 (2024).

[9]. Derek E. Bambauer, “Against Jawboning,” Minnesota Law Review 100:51 (2015).

[10]. Murthy v. Missouri, 144 S. Ct. 1972 (2024); Moody v. NetChoice, 144 S. Ct. 2383, 2393 (2024).

[11]. “Contempt Sentence Upheld,” Spokane Daily Chronicle, April 8, 1966; “Jawbone,” The New Slang, Los Angeles Times, Sept. 3, 1917; Wallace Smith, “Slang Our Soldier Boys Use,” San Francisco Examiner, Sept. 27, 1917; “How ‘Jawbone’ Came To Mean Credit,” The New York Times, Feb. 21, 1918.

[12]. Holmes Alexander, “Touch of Caesarism Seen In The White House,” The Desert Sun, March 16, 1966; See Hearing on The 1970 Economic Report of the President Before the Joint Economic Committee, 91st Cong. (1970) (statement of George P. Shultz, Secretary of Labor).

[13]. Derek E. Bambauer, “Against Jawboning,” Minnesota Law Review 100:51 (2015), p. 57.

[14]. Ibid.

[15]. Ibid.

[16]. Ibid., p. 57; Daphne Keller, “Who Do You Sue?,” Hoover Institution Aegis Series Paper No. 1902, 2019. https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf.

[17]. Vullo, 144 S. Ct. at 1323.

[18]. Ibid., p. 1329.

[19]. Ibid., p. 1331.

[20]. Ibid., p. 1331.

[21]. Keller. https://www.hoover.org/sites/default/files/research/docs/who-do-you-sue-state-and-platform-hybrid-power-over-online-speech_0.pdf.

[22]. Viacom, 676 F.3d at 26-27.

[23]. 17 U.S.C. § 512(k)(1); Shelter Capital, 718 F.3d at 1014.

[24]. 17 U.S.C. § 512(a).

[25]. 17 U.S.C. § 512(b).

[26]. “How Does a CDN Work?,” CDNetworks, May 4, 2024. https://www.cdnetworks.com/blog/web-performance/how-content-delivery-networks-work.

[27]. 17 U.S.C. § 512(c).

[28]. Viacom, 676 F.3d at 38-39.

[29]. 17 U.S.C. § 512(c)(3).

[30]. 17 U.S.C. § 512(c)(3)(A).

[31]. 17 U.S.C. § 512(c)(1)(A).

[32]. 17 U.S.C. § 512(d).

[33]. 17 U.S.C. § 512(i)(1)(A).  

[34]. Ibid.; Jennifer M. Urban et al., “Notice and Takedown in Everyday Practice,” UC Berkeley Public Law Research Paper (March 22, 2017). https://ssrn.com/abstract=2755628.

[35]. Perfect 10, Inc. v. CCBill LLC, 488 F.3d 1102, 1109-1110 (9th Cir. 2007); Urban. https://ssrn.com/abstract=2755628.

[36]. Br. Pet. Cox 1.

[37]. 17 U.S.C. § 512(b)(2)(E); 17 U.S.C. § 512(d)(3).

[38]. 17 U.S.C. § 512(m); Viacom, 676 F.3d at 35.

[39]. Moody, 144 S. Ct. at 2395; Zeran v. America Online, Inc., 129 F.3d 327, 331 (4th Cir. 1997).

[40]. Shelter Capital, 718 F.3d at 1022 (citing S.Rep. No. 105-190, at 48; H.R.Rep. No. 105-551, pt. 2, at 57-58).

[41]. Ibid. 

[42]. BMG Rights Management v. Cox Communications (Cox I), 149 F. Supp. 3d 634 (E.D. Va. 2015).

[43]. Urban. https://ssrn.com/abstract=2755628; Lenz v. Universal Music Corp., 815 F.3d 1145, 1153 (9th Cir. 2016).

[44]. 17 U.S.C. § 512(g).

[45]. 17 U.S.C. § 512(g)(3)(D); McIntyre v. Ohio Elections Comm’n, 514 U.S. 334, 357 (1995).

[46]. Urban. https://ssrn.com/abstract=2755628.  

[47]. Shelter Capital, 718 F.3d at 1014.

[48]. Zeran, 129 F.3d at 331.

[49]. “Startups, Content Moderation, & Section 230,” Engine, Dec. 9, 2021, pp. 4-5. https://static1.squarespace.com/static/571681753c44d835a440c8b5/t/61b26e51cdb21375a31d312f/1639083602320/Startups,+Content+Moderation,+and+Section+230+2021.pdf.

[50]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[51]. Peter Kafka, “Veoh finally calls it quits: layoffs yesterday, bankruptcy filing soon,” CNET, Feb. 11, 2010. http://www.cnet.com/news/veoh-finally-calls-it-quits-layoffs-yesterday-bankruptcy-filing-soon; Zeran, 129 F.3d at 333.

[52]. 47 U.S.C. § 230(a); Batzel v. Smith, 333 F.3d 1018, 1026-1028 (9th Cir. 2003).

[53]. 47 U.S.C. § 230.

[54]. 47 U.S.C. § 230(f)(3); 47 U.S.C. § 230(c)(1); Force v. Facebook, Inc., 934 F.3d 53, 68-71 (2d Cir. 2019).

[55]. Force, 934 F.3d at 68-71.

[56]. 47 U.S.C. § 230(e)(2).

[57]. CCBill, 488 F.3d at 1118-1119.

[58]. Shelter Capital, 718 F.3d at 1014; S.Rep. No. 105-190, at 8 (1998).

[59]. Lenz, 815 F.3d at 1149.

[60]. Ibid., pp. 1151-1153.

[61]. Ibid., pp. 1149-1150.

[62]. Trevor Potter correspondence with Chad Hurley, e-mail and U.S. mail, Oct. 13, 2008. https://perma.cc/C8AR-4XZ5; Compl. 6, Shopify Inc. v. Doe 1, No. 23-cv-9102 (S.D.N.Y. filed Oct. 16, 2023); Shreya Tewari, “Over thirty thousand DMCA notices reveal an organized attempt to abuse copyright law,” LumenDatabase.org, April 22, 2022. https://lumendatabase.org/blog_entries/over-thirty-thousand-dmca-notices-reveal-an-organized-attempt-to-abuse-copyright-law.

[63]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[64]. Ashley Belanger, “Parody site ClownStrike refused to bow to CrowdStrike’s bogus DMCA takedown,” Ars Technica, Aug. 6, 2024. https://arstechnica.com/tech-policy/2024/08/parody-site-clownstrike-refused-to-bow-to-crowdstrikes-bogus-dmca-takedown.

[65]. “Chilling Effects / Lumen Database,” Library of Congress, last accessed Oct. 1, 2024. https://www.loc.gov/item/lcwaN0008229.

[66]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[67]. “Community Guidelines strike basics on YouTube,” YouTube Help, Google, last accessed Sept. 26, 2024. https://support.google.com/youtube/answer/2802032?hl=en.

[68]. Compl. 8-10, YouTube LLC v. Brady, No. 19-353 (D. Neb. filed Aug. 19, 2019).

[69]. Tim Cushing, “Law Enforcement Officer Openly Admits He’s Playing Copyrighted Music To Prevent Citizen’s Recording From Being Uploaded To YouTube,” Techdirt, July 6, 2021. https://www.techdirt.com/2021/07/06/law-enforcement-officer-openly-admits-hes-playing-copyrighted-music-to-prevent-citizens-recording-being-uploaded-to-youtube.

[70]. Andy Maxwell, “TV Museum Will Die in 48 Hours Unless Sony Retracts YouTube Copyright Strikes,” TorrentFreak, Sept. 4, 2023. https://torrentfreak.com/tv-museum-will-die-in-48-hours-unless-sony-retracts-youtube-copyright-strikes-230904; Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628; Ernesto Van der Sar, “YouTube Copyright Strike Took Down Livestream Before it Even Started,” TorrentFreak, Jan. 31, 2020. https://torrentfreak.com/youtube-copyright-strike-took-down-livestream-before-it-even-started-200131.

[71]. Drew Harwell and Taylor Lorenz, “Millions work as content creators. In official records, they barely exist,” The Washington Post, Oct. 26, 2023. https://www.washingtonpost.com/technology/2023/10/26/creator-economy-influencers-youtubers-social-media.

[72]. Cox, 881 F.3d at 303-305.

[73]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[74]. Cox I, 149 F. Supp. 3d at 653.

[75]. Ibid., p. 655.

[76]. 17 U.S.C. § 512(b)(2)(E); 17 U.S.C. § 512(d)(3).

[77]. 17 U.S.C. § 512(c)(3)(A).

[78]. See Lenz, 815 F.3d at 1156-1157.

[79]. Concord Music Group, Inc. v. X Corp., No. 3:23-cv-00606, slip op. at 18 (M.D. Tenn. March 5, 2024); Mike Masnick, “Just A Click Away: How To Improve Broadband Access,” Techdirt, July 18, 2022. https://www.techdirt.com/2022/07/18/just-a-click-away-how-to-improve-broadband-competition.

[80]. Karl Bode, “Two Decades Later And The FCC Is Still Trying To Crack Down On Anti-Competitive Deals Between Landlords And Broadband Monopolies,” Techdirt, March 18, 2024. https://www.techdirt.com/2024/03/18/two-decades-later-and-the-fcc-is-still-trying-to-crack-down-on-anti-competitive-deals-between-landlords-and-broadband-monopolies; Br. Pet. Cox 35.

[81]. Packingham v. North Carolina, 137 S. Ct. 1730, 1735-1736 (2017).

[82]. Br. Pet. Cox 2, 35.

[83]. Vullo, 144 S. Ct. 1316.

[84]. Ibid., p. 1332.

[85]. Moody, 144 S. Ct. 2383; Cathy Gellis, “In The NetChoice Cases, Alito And His Buddies Are Wrong, But Even If They Were Right It May Not Matter, And That’s Largely Good News,” Techdirt, July 1, 2024. https://www.techdirt.com/2024/07/01/in-the-netchoice-cases-alito-and-his-buddies-are-wrong-but-even-if-they-were-right-it-may-not-matter-and-thats-largely-good-news.

[86]. Ibid., pp. 2393-2394.

[87]. Ibid., pp. 2395-2396.

[88]. Ibid., p. 2393.

[89]. Miami Herald Publishing Co. v. Tornillo, 418 U.S. 241, 258 (1974); Moody, 144 S. Ct. at 2399-2403.

[90]. Tornillo, 418 U.S. at 257.

[91]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[92]. Ibid. 

[93]. Batzel.

[94]. Ibid.

[95]. Prager University v. Google, 951 F.3d 991, 994 (9th Cir. 2020).

[96]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[97]. Ibid., p. 42.

[98]. Tim Cushing, “Game Developer Admits It Filed Bogus Copyright Claims, But Says It Had No Other Way To Silence A Critic,” Techdirt, Dec. 19, 2018. https://www.techdirt.com/2018/12/19/game-developer-admits-it-filed-bogus-copyright-claims-says-it-had-no-other-way-to-silence-critic.

[99]. Urban, p. 121. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[100]. Section 512 Report, U.S. Copyright Office 19 (2020).

[101]. Bantam Books, Inc. v. Sullivan, 372 U.S. 58, 70 (1963).

[102]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[103]. Shelter Capital, 718 F.3d at 1011.

[104]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[105]. 17 U.S.C. § 512(i)(1)(A).

[106]. Cox, 881 F.3d at 301-303.

[107]. Murthy, 144 S. Ct. at 1981-1982; Prager University, 951 F.3d at 994.

[108]. Recording Indus. Ass’n of Am., Inc. v. Verizon Internet Servs., Inc., 351 F.3d 1229, 1234-1235 (D.C. Cir. 2003).

[109]. Packingham, 137 S. Ct. at 1737-1738.

[110]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[111]. Eric Goldman, “Record Label Sends Bogus Takedown Notice, Defeats 512(f) Claim Anyway–White V. UMG,” Technology & Marketing Law Blog, Sept. 15, 2024. https://blog.ericgoldman.org/archives/2024/09/record-label-sends-bogus-takedown-notice-defeats-512f-claim-anyway-white-v-umg.htm.

[112]. Order Granting Summary Judgment, Bungie, Inc. v. Minor, No. C22-371 MJP (W.D. Wash. March 6, 2024).

[113]. 17 U.S.C. § 512(f).

[114]. Eric Goldman, “11th Circuit UPHOLDS a 512(f) Plaintiff Win on Appeal–Alper Automotive v. Day to Day Imports,” Technology & Marketing Law Blog, Aug. 18, 2022. https://blog.ericgoldman.org/archives/2022/08/11th-circuit-upholds-a-512f-plaintiff-win-on-appeal-alper-automotive-v-day-to-day-imports.htm; Eric Goldman, “It Takes a Default Judgment to Win a 17 USC 512(f) Case–Automattic v. Steiner,” Technology & Marketing Law Blog, March 13, 2015. https://blog.ericgoldman.org/archives/2015/03/it-takes-a-default-judgment-to-win-a-17-usc-512f-case-automattic-v-steiner.htm.

[115]Lenz, 815 F.3d at 1153-1154; Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[116]. Urban, pp. 128-129. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[117]. Viacom, 676 F.3d at 30-32.

[118]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[119]. Hassell v. Bird, 5 Cal. 5th 522 (Cal. 2018).

[120]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628; Br. Pet. Cox 10.

[121]. Cox II, 93 F.4th at 229; id. at 237; Br. Pet. Cox 36.

[122]. Urban, pp. 129-131. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628; Pamela Samuelson and Tara Wheatland, “Statutory Damages in Copyright Law: A Remedy in Need of Reform,” William & Mary Law Review 51:2 (Nov. 1, 2009), p. 439. https://scholarship.law.wm.edu/wmlr/vol51/iss2/5.

[123]. Shelter Capital, 718 F.3d at 1028.

[124]. Ellison v. Robertson, 357 F.3d 1072, 1077 (9th Cir. 2004).

[125]. Ibid.

[126]. Urban. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2755628.

[127]. Br. Pet. Cox 3.

[128]. Cox II, 93 F.4th at 236.

[129]. Br. Pet. Cox 1.

[130]. “NO FAKES Act,” S.4569, 118th Congress (2024); “SHOP SAFE Act,” S.2934, 118th Congress (2023); Corynne McSherry, “NO FAKES – A Dream For Lawyers, A Nightmare For Everyone Else,” Techdirt, Sept. 10, 2024. https://www.techdirt.com/2024/09/10/no-fakes-a-dream-for-lawyers-a-nightmare-for-everyone-else.

[131]. Zeran, 129 F.3d at 333.

[132]. Lisa Macpherson, “Antitrust or Anti-truth? Jim Jordan’s Latest Attack on the ‘War on Disinformation,’” Public Knowledge, April 1, 2024. https://publicknowledge.org/antitrust-or-anti-truth-jim-jordans-latest-attack-on-the-war-on-disinformation.

[133]. New York Times Co. v. Sullivan, 376 U.S. 254, 277-278 (1964).

[134]. Packingham, 137 S. Ct. at 1738 (citing Ashcroft v. Free Speech Coalition, 535 U.S. 234, 255 (2002)); Ashcroft, 535 U.S. at 255 (citing Broadrick v. Oklahoma, 413 U.S. 601, 612 (1973)).
