Raising the bar: Ending an era of low expectations
One of the founders of Arnold Ventures on why government needs better information
Social change is hard to engineer. That is the premise of Megan Stevenson’s important recent article and one with which I agree.
My wife, Laura, and I came to a similar conclusion more than 15 years ago when we embarked on our philanthropic journey. At that time, we fully intended to be passive investors. We set out to identify promising programs with a clear evidence base and large potential impact, and to write a few large checks. What started out as a straightforward task turned out to be much more difficult than imagined.
We repeatedly found programs, like universal prekindergarten, that claimed strong evidence of success. But a deeper dive into the research often gave a more nuanced and qualified view. The studies of the two most prominent programs had small sample sizes, or were decades old, or used questionable randomization, or raised concerns about attrition and blinding. Further, even if a particular intervention was effective, its ability to scale was limited given the cost of the extensive wraparound services.
That initial exercise yielded three observations that formed the core of what is now Arnold Ventures. First, despite the way the term “evidence-based” is often thrown around in public policy conversations, the quantity and quality of academic research used to justify public policy or argue for additional public funding are actually quite limited. Governments at all levels act on imperfect information, and as such we all risk suboptimal or detrimental outcomes from uninformed policies. Improving research methodologies and funding high-quality replications of program evaluations would therefore become a major focus area of our budding foundation.
Second, social programs that had significant positive impacts on a large number of people were likely already part of the fabric of society or had real limits to scaling. Social entrepreneurs have piloted many programs over the years. Those with an obvious, material positive impact were likely already adopted. That created doubts about whether we could find programs that had significant positive effects on a large population and the ability to scale, yet had not already been widely adopted.
Third, focusing on policy that governs the existing systems is generally a more scalable and sustainable lever than instituting and testing individual programs. The rules and incentives of a system drive the behavior of that system. And every system in which the government is involved is, by necessity, engineered. The question is not whether to create rules that drive public systems. Systems are already driven by a set of rules and incentives, often guided not by an evidence base, but by the power of special interests and political expediency. Put another way, it’s wiser to spend our time adjusting the way complex systems work than making them ever more complex.
With these realizations, we decided to focus on using high-quality evidence to support policymakers as they design the rules of our most important systems. Many of these questions don’t have clear answers, but the policies underlying them have a profound impact on people’s lives.
Take, for instance, the decision at the core of the criminal justice system: the ability of the state to limit one’s freedom and constitutional rights through incarceration. There is no set answer as to when and how to use this power. There is no randomized controlled trial that says exactly what the consequence should be for every offense and every offender. But that does not mean that evidence has no value. On the contrary, the American federalist system has created scores of natural experiments. There are jurisdictions with harsh sentencing practices and others that are more lenient. Studying cause and effect across this variation has the potential for enormous value.
Society can either assume the current rules governing the criminal justice system are optimal or try to use learnings to improve them. Each legislative session, dozens of bills are introduced proposing to modify the rules, each with the intent to improve outcomes. But as Stevenson rightly points out, implementing a single intervention (even one that is backed by rigorous evidence) can have unintended effects given the complexity of the system as a whole.
This is precisely why rigorous evaluation is critical. Absent a commitment to honest after-the-fact evaluation — and that’s exceedingly rare in today’s politics — we risk inflicting harm on the very communities that we seek to serve.
The healthcare industry shows the downside of relying on instinct to make complex and far-reaching policy decisions. Understanding the effect of an intervention in a complex system is difficult without a rigorous evaluation. Doctors have been known to practice a standard of care for decades that is later shown to be of no benefit to patients. For instance, doctors recommended calcium administration following cardiac arrest for decades based on both theory and anecdote. But when rigorously evaluated, calcium was found to have no benefit and likely caused harm. Because medicine has such a vital impact on individuals, it was an early adopter of evidence-based practice, with thousands of research studies conducted each year to bring the field closer to the truth.
The criminal justice system has a similar impact on people’s lives. Every aspect, including the decision to arrest, pretrial detention, sentencing, probation and parole, and reentry from prison, materially affects the individual and the community. Policymakers must have the best information available when making these decisions, lest they create worse outcomes.
(For more on how research has gotten better, and the challenge of scaling strong results, see Jennifer Doleac’s essay, “Fixing the research to policy pipeline.”)
Skeptics like Stevenson rightly point out that weak or flawed research has driven suboptimal, and even harmful, decisions in the past. But that outcome is not inevitable. The problem is that for decades, the bar on quality, set by policymakers, governments and even philanthropy, has been too low. Funders, chief among them the federal government, often did not demand rigorous studies or provide the funding necessary to conduct them.
Because Stevenson focuses her review on studies completed prior to 2006, her conclusions rest on evidence from a different era of low expectations. The good news is that funders are now more discerning, and the quality of research has risen. New funders have actively prodded the field to raise the bar, and I am proud Arnold Ventures is one of them.
Implicit in Stevenson’s assertions is the idea that it is virtually impossible to improve or alter already engineered systems that affect people’s lives. I disagree with that notion. We must commit to learning and evaluation, especially in vital and complex systems like criminal justice. Policymakers must have better information to make critical decisions.
Evidence-based policymaking is hard. The path to change is not linear. It is often incremental, laden with failures and polluted by the impurities of politics and imperfect data. But abandoning our fealty to evidence will not drive better outcomes.
John Arnold is co-founder of Arnold Ventures LLC. Arnold Ventures is a financial supporter of the Niskanen Center.