The precautionary principle
The virtues of focused change and the uncertainties of systemic reform.
In some quarters, social science research is in the crosshairs, targeted as far too weak to justify evidence-based public policy interventions. But the hunt is badly misguided: it conflates the inherent complexities of human behavior and social dynamics with a demand to identify “root causes” of social problems and single cause-and-effect relationships.
Take, for example, “Cause, Effect, and the Structure of the Social World,” in which law professor Megan Stevenson critiques criminal justice innovations generated from evidence derived from randomized controlled trials (RCTs) and the idea that policy can be “engineered” from experiments. Stevenson, unconstrained by experience developing an RCT, offers a position that sounds compelling but is fundamentally flawed. Experiments in criminal justice reform are not idiosyncratic ideas launched by academics trying to “engineer” the world. They are developed by practitioners and community members who have a sense of what will work from their real-world experience and who have skin in the game. The academics conducting RCTs are not “engineers”; they are social scientists evaluating the ideas of others. Stevenson cites Friedrich Hayek’s critique of social planning under socialism, where planners foolishly thought they could engineer the complexities of markets and social order. Hayek argued for simpler approaches that were more grounded in the real world and relied on insights from common people with actual experience and expertise. This is exactly how most RCTs in criminal justice are developed.
Stevenson suggests we should abandon pilot testing programs or policies through RCTs before attempting to scale up, and instead embrace methods that “seek systemic reform, with all its uncertainties.” She argues that fifty years of RCTs in criminal justice have produced only a few examples of programs that reduce crime, and uses that record to advocate for focusing on “stabilizers,” or those social forces that are difficult to change. Yet she articulates no examples of root causes that can be changed by policy.
The complaint is an old canard. In the 1970s, a number of prominent scholars similarly argued that crime could only be reduced by addressing “root causes.” James Q. Wilson noted that suggesting crime could only be reduced by eliminating root causes was a “causal fallacy,” because most root causes are things that cannot be changed through policy.
Stevenson also critiques experimental evaluations, whether controlled or randomized, for being too narrow in scope, idiosyncratic to a given context and difficult to scale to entire populations. But experiments by design have to be narrow in scope. To establish causal evidence, the treatment must be related to the observed effect, the treatment must precede the effect in time, and there must be no other likely explanation for the observed change. Though the logic of causal inference requires a narrow scope, that doesn’t mean the results are not meaningful or cannot be scaled. In fact, one of the benefits of a well-done field experiment is that it often manipulates a policy or practice in a realistic context that can indeed be scaled with sufficient political will.
Stevenson provides a pessimistic review of the empirical evaluations of criminal justice programs and policies. She claims, for example, that a test combining “data across all seven” experiments in which police officers were randomly assigned to issue an arrest in misdemeanor domestic assault cases showed that arrest did not “have a consistent or large effect on recidivism.” It is worth pointing out that the combined test in question covered five completed experiments in Charlotte, N.C.; Colorado Springs, Colo.; Dade County, Fla.; Milwaukee, Wis.; and Omaha, Neb. While the combined effect on recidivism was not statistically significant, the study found that arrest on average reduced new victimization by 25%. Most policymakers and community members would consider a 25% reduction in victimization practically significant.
Stevenson also claims that RCTs of hot-spot policing show that increased police presence in high-crime areas “produces small but statistically significant” decreases in reported crimes but fails to show long-term effects. But the RCTs of hot-spot policing typically measure very modest dosages of additional police presence in a given area. Moreover, most of these RCTs do not change police tactics; the police are simply required to spend a few extra minutes (15-20) per shift in a given hot spot. That these experiments on average show a deterrent impact of police presence on crime suggests, if anything, a very conservative test of that presence. By contrast, a number of high-quality controlled experiments demonstrate that adding a significant number of police to high-crime areas substantially reduces crime.
Whether this approach is sustainable and preferable is another question, one that requires further policy experimentation on how to conduct the least intrusive forms of policing that reduce crime. But by any honest reckoning, there is strong evidence that hot-spot policing yields meaningful benefits.
Not all interventions, however, lend themselves to RCTs. Changing state laws on the minimum age for driving, for example, cannot be tested in an experimental trial. This type of policy change is easily scalable, though, and we have good evidence that driving age restrictions help reduce crime in the United States and Canada. These restrictions do not reduce crime by producing systemic change; they simply limit driving hours and opportunities for teenage criminal offending and victimization.
Rather than abandoning RCTs as one method to test policy or program effects, policymakers should treat evidence from experiments as incremental tests that need replication and careful consideration of their scalability to entire populations. Experiments are simply a method for developing a causal test of a policy or program in a specific context. Those who write responsibly about RCTs do not pretend to test a complex reality of interacting systems at scale, or to forecast what will happen in the future. Other methods are needed to forecast what will happen when a program shown to work in one time period and setting is applied elsewhere.
Stevenson’s suggestion that, in lieu of sufficiently persuasive evidence that discrete policy interventions yield big benefits, we should instead embrace “systemic reform with all its uncertainties” is a classic form of Hegelian theory, in that it essentially says, “We can only know evidence of what works through major systems of change where reality will reveal itself.” Such an idea is not scientific because it cannot be falsified.
Embracing systemic change without articulating a clear approach with strong scientific evidence doesn’t abide by the precautionary principle. Under that principle, any policy with the potential to generate harm, absent near-scientific “certainty” about its safety, places “the burden of proof about absence of harm … on those proposing the action.”
Abandoning discrete policy reforms that don’t meet the exceedingly high bar of being proven beyond any doubt to yield profound and lasting change would cast aside all criminal justice policies that do not address root causes of crime. Diversionary programs for low-level nonviolent offenses are exactly the type of evidence-based, targeted reforms that have proven effective in multiple settings but that, under Stevenson’s arguments, would have to be set aside because they do not address systemic change.
The truth is, we have good evidence of programs and policies that work to control crime and surgically allocate the use of the criminal justice system. Policymakers and the criminal justice community should build on this evidence and continue to work on programs and policies that reduce crime with the minimum necessary use of criminal sanctions.
John MacDonald is a professor of criminology and sociology at the University of Pennsylvania.