For better tech, we must think bigger than content moderation
Content-focused policies risk censoring speech without fixing the underlying business model that pushes noxious material where it will do the most damage.
If there is one thing that seems to be uniting many policymakers, pundits, and scholars across the political spectrum, it is that the internet and digital media are not the inevitably liberating forces they were purported to be. Apple’s famous 1984 Super Bowl commercial cast its new Macintosh personal computer as a renegade force against an Orwellian Big Brother. But today private companies have accomplished what the twentieth-century state could only dream of: comprehensive tracking of the movements, communications, and purchases of individuals in the United States and across the world.
In her recent essay for Hypertext, Jennifer Burns argues, “Digital media is acting on our politics in obvious ways, but our politics don’t seem to be acting much on digital media.” However, the problems we face today are, unfortunately, not simply the result of political inaction. Instead, they reflect failures of past policymaking. Legal regimes designed long ago have left us with a system that continues to favor the interests of major companies while doing little to address many Americans’ concerns about digital media. Of course, we will need to respond more effectively to recent innovations in digital technology. However, we must ensure reforms target the long-standing incentives shaping the tech giants’ decision-making — and recognize that the broader political context matters, too. Often, the features of online life we find most objectionable are symptoms of two deeper problems: (1) the industry’s reliance on digital advertising, which drives its aggressive collection of user data, and (2) broader political dynamics that are spurring affective polarization, extremism, and the “dark passions” of hatred and resentment. This essay will focus on (1), as fixing (2) requires a project of liberal-democratic revitalization that no digital policy reform alone can accomplish.
Developing an effective tech policy agenda for the 21st century demands conceptual clarity on two dimensions. First, what are the kinds of problems that we are most worried about? To what degree are these problems rooted in technological innovation itself or in the interaction of technology with our political-economic structures? Second, how have policymakers already attempted to govern digital technologies, and what can we learn from these past attempts? It is imperative we keep these questions top of mind so that we avoid wasting valuable political energy on legislative efforts that might sound nice but only exacerbate the very problems they purport to address.
Diagnosing the pathologies of the digital revolution
The most urgent concerns about digital technologies fall under the following categories:
Fairness, discrimination, and algorithmic bias.
Disinformation, extremism, and polarization.
Harmful content, including harassment, abuse, and hate crimes.
“Addictive” properties of smartphones, social media platforms, and online gambling.
Free speech and censorship.
Intellectual property, copyright protections, and rights to publicity and/or personality.
Data governance, including privacy and portability of data.
Digital surveillance by public and private authorities.
Many of these problems are cast as failures of content moderation, but that diagnosis just scratches the surface. The deterioration of discourse on the modern internet, including popular social media platforms, is a symptom of a deeper, structural condition: the reliance of most companies on a digital advertising business model. This business model means that companies are incentivized to engage in pervasive data collection and analysis — also known as surveillance — to track and maximize user engagement, and thus advertisers’ click-through rates. That imperative drives product and design choices from top to bottom.
It is the power of surveillance-based targeting that makes many of the aforementioned concerns, like online extremism, so worrisome. Noxious content can be pushed to the people most susceptible to its message because of the data that companies collect about them. Policies that focus on moderating content risk censoring speech without fixing the underlying data governance regime that encourages companies to push that speech into the corners where it will do the most damage.
Nor will better moderation of online content solve the larger cultural problems of which it is as much symptom as cause. Members of traditional media, such as cable news, are also responsible for the mainstreaming of extremism and disinformation.1 And, of course, we cannot forget the public commentators and politicians who legitimate and perpetuate disinformation and violent rhetoric and policies. By failing to hold accountable leaders who take illegal, illiberal, and anti-democratic actions, whether through social censure or appropriate legal and political methods, policymakers have forsaken powerful remedies for refuting disinformation and extremism while rebuilding trust within the populace. Political extremism online has become incorporated into this broader fabric of partisanship.
Reforms that target online content moderation are unlikely to resolve problems driven by deeper structural issues with the business model of the modern internet or the pathologies of our politics. In fact, they can backfire by making it harder for vulnerable communities to organize while permitting the giants to comply with the letter of the law but exploit loopholes.
Consider a recent instance of American politicians acting on digital technology companies: the 2018 Allow States and Victims to Fight Online Sex Trafficking Act, known as FOSTA-SESTA because it incorporated elements of another bill, the Stop Enabling Sex Traffickers Act.
Learning from past political decisions to govern digital media
FOSTA-SESTA amended Section 230 of the 1996 Communications Decency Act, a law that attempted to reduce pornographic online content. While the Supreme Court struck down most of the Communications Decency Act for violating the First Amendment’s speech protections, Section 230 survived. This section protected online service providers, and now social media platforms, from liability for content users shared. In large part, it responded to the uncertainty produced by two major court decisions regarding early internet forums. Before Section 230, online service providers that moderated comments on early discussion boards would have been treated as publishers, not distributors, of that content. Consequently, they would have been liable for violations of defamation and obscenity laws. This created perverse incentives either to let user-generated content run wild or to stop hosting such material entirely. In short, Section 230 is what enables online service providers to moderate content at all. Section 230 does not excuse companies from a range of responsibilities, such as compliance with intellectual property law and federal criminal law. But it enables them to exercise their speech rights to moderate content and host user-driven sites that billions of people find useful.
FOSTA-SESTA removed Section 230’s liability protections with regard to content that promoted sex trafficking. However laudable its intentions, the law seems to have backfired. Since the adoption of FOSTA-SESTA and the unrelated seizure of Backpage.com, both in April 2018, authorities have found it harder to collect evidence of sex trafficking as platforms have moved overseas and many people involved in the trade now rely on encrypted and disappearing messages on social media platforms.
Victims of sex trafficking and other sex workers must also be able to communicate online and warn each other about dangerous actors. Such protective practices might also seem like “surveillance” or monitoring, but they are survival strategies embraced by a relatively less powerful population. Scholars use the term “sousveillance” to denote the difference. A 2020 survey of sex workers’ experiences in the wake of the 2018 changes found that FOSTA-SESTA gave them “a general sense of fear and paranoia” about what information they could legally and safely share online, making it harder to rely on the internet as a tool for finding community, support, and resources for verifying the trustworthiness and safety of new clients.2 While a 2023 D.C. Circuit ruling helped affirm that “sex workers, advocates for sex-workers’ rights, and other online speakers were protected from prosecution,” ambiguities about the law, and thus its potential chilling effect, remain.
The internet’s liberating potential is undermined less by objectionable content than by relentless surveillance and targeting of messages.
The example of FOSTA-SESTA suggests the potential limits of policy responses that prioritize reducing the presence of morally objectionable content online. Such an approach might successfully roll back undesirable content on mainstream, market-dominant, and domestic websites. But it may accomplish this while also reducing legal, less objectionable, or even morally desirable content, including content produced by the very people the policy is supposed to serve. It also makes it harder for public and private actors to find reliable information about dangerous individuals and groups.
I do not believe taking seriously the criticisms of FOSTA-SESTA means we must give up on the possibility of policy solutions to problems we see online. Rather, within this story we can make out the contours of an alternative paradigm that understands digital life to be part and parcel of our broader political economy. While seeking to hold companies responsible for social harms, we must not lose sight of how ordinary individuals use the internet to empower themselves and their communities, and whether private and public policies undermine or support these capacities.
The internet’s liberating potential is undermined less by objectionable content and more by the massive data collection, analysis, and use that render individuals, and anyone to whom they are connected, vulnerable to private and public forms of surveillance. Online data is fundamentally relational, which means companies can collect data even about non-users. Data you provide to private companies can thus be used to police and punish other people. Consider, for example, users of a popular Muslim prayer app, Muslim Pro, who felt exploited and complicit in the oppression of their peers when they found out the company’s data had wound up being shared with the American military. It is this fundamental business of data that shapes most of the choices technology companies make, including the promotion of noxious or seemingly “addictive” content. We are likely to see these dynamics of so-called “surveillance capitalism” continue to play out as chatbot companies search for profit models, as seen in OpenAI’s turn to digital advertising.
Privacy reforms need not pursue a total ban on digital advertising; the European Union’s General Data Protection Regulation has shifted companies’ incentives away from more personally invasive behavioral targeting toward contextual targeting, a trend that American privacy policy could further encourage.3 Federal privacy legislation could limit companies to collecting only the data required to offer an effective service to their consumers; empower users by making further data collection opt-in rather than the default; strengthen existing protections against discriminatory algorithmic targeting with respect to housing, employment, and banking; and prohibit state actors and police from circumventing Fourth Amendment protections against warrantless searches and seizures by purchasing data from brokers. To be clear: Because challenging the status-quo data governance regime that favors companies over users would harm the bottom line of most major digital technology companies, it is an unavoidably ambitious project. But if we think we are at a revolutionary moment that will define the future of our ever-more digital political economy, then the time for ambition is now.
Kristen Collins is a Senior Fellow with the F.A. Hayek Program for Advanced Study in Philosophy, Politics, and Economics and a Senior Research Fellow at the Mercatus Center at George Mason University. Her Substack is Theory of Virtual Sentiments.
Blunt and Wolf 2020; see also Grant 2021 and Kosseff 2022.