Welcome to Oz: A look behind the government curtain
Policy decisions are rarely guided by evidence.
“What is the empirical basis that policy is made on an empirical basis?”
I chuckled as a longtime colleague recently posed this question, recalling my transition from the philanthropic sector to government.
I expected that, in government, research would be used to inform decision-making as had been the case in my previous work, but what I experienced was quite different. I was like Dorothy arriving in Oz (“The Wiz” adaptation, to be clear). I expected the best and brightest subject-matter experts to be on the job, with the best thinkers in the field analyzing existing research and assessing current opportunities to propose a sound way forward — nothing short of wizardry. Unfortunately, my optimism proved unwarranted.
What I found was something akin to a tiny man behind a curtain operating a machine that represented the all-knowing wizard. Each administration I served — federal and state, Republican and Democrat — was, at the policy level, staffed primarily by administration loyalists and individuals who worked the campaign. Subject-matter experts existed, but their expertise did not necessarily drive policy, especially when it conflicted with special interests. More often than not, the policy options were preordained based on ideological inclinations; then, after the fact, evidence would be used to justify the chosen interventions as “evidence-informed.” In other words, research followed policymakers’ predilections, not the other way around.
I’ve long wrestled with the fact that research and policy don’t coexist in the real world as most Americans hope they would. Myriad influences shape policy, but the belief that it is exclusively — or even primarily — influenced by research is largely unfounded. We need to look no further than the struggle to ban menthol cigarettes, our tortured history with climate science or the reluctance to regulate social media.
I saw this firsthand in my role at an Office of Management and Budget during the height of the “performance-based budgeting” craze, a government trend focused on reinforcing evidence-based practice by ensuring resources were allocated based on evidence of impact. Too often, the data collection processes had no empirical basis, and there was no validation of the results presented. The process was reminiscent of penciled numbers on the back of an envelope. It felt designed more to justify an asserted commitment to the practice of performance-based budgeting than to meaningfully allocate resources based on performance. Rather than responding to empirical results, the budgeting process was frequently driven by the loudest, most organized stakeholders, who created political winds too strong to dismiss.
This wasn’t an isolated incident. It is not unusual to see research used to justify decisions rather than to inform decision-making. When it comes to research, a central issue may be less about methodology and more about the lie we are telling ourselves about its role in policymaking.
This is the light in which I read Megan Stevenson’s Boston University Law Review article “Cause, Effect, and the Structure of the Social World.” It highlights tensions in the use of research and questions whether a popular gold-standard evaluation method — randomized controlled trials, or RCTs — has meaningfully advanced the field of criminal justice. Stevenson questions many things, including the incentives in the field of research, the real impact of the interventions deemed successful and the replicability of those findings. What I found refreshing wasn’t so much the point she was making about RCTs as the dialogue she was encouraging. I have long questioned whether we overstate what we know and, even more, whether the little we know actually drives policy. We need to move from wielding appeals to “what the evidence says” as weapons in political debate toward constructive conversations about impact.
I agree with Stevenson that it is difficult to wrestle with these issues without devolving into hopelessness. Policy should be informed by the best of what we know. Still, we should also exercise great humility about what we actually know and even more humility about whether that knowledge base contributes to how the sausage is made. For too long, we have overused the concept of “evidence-based practice” without a robust understanding of the very short list of interventions that qualify under the definition. We have also used the concept to preempt conversations about the need to further support emerging innovation and expand what we know using myriad forms of research methodology. Most notably, we have weaponized these research methods against communities of color. We have too often made communities of color the subjects of research without valuing their agency in creating and contributing their own innovations, including through offering them the power and resources to reshape the stabilizing forces that Stevenson argues render small interventions meaningless.
Stevenson’s article concludes with a quote from existing research that frames RCTs as a research design that “limits the focus to interventions that leave systems intact and change some element that is manipulable without doing ‘damage’ to the system.” While this in no way advances an abolitionist frame, it made me recall pointed critiques by academic Dorothy Roberts, who, in her own Northwestern University Law Review article, states that “making criminal law democratic requires more than reform efforts to improve currently existing procedures and systems.”
Perhaps Dorothy of Oz and I have both had it all wrong. Maybe it was never about arriving in a gilded hall to solicit the best counsel from an all-knowing wizard. Maybe it has always been about building momentum with the movement leaders we encounter along the way to galvanize the kind of support that will truly upset stabilizing forces and usher in meaningful change. I have witnessed the use of research to influence policy, but we should not overstate how often it happens or whether research alone — without robust investments in advocacy, systems change, media and other social forces that contribute to a groundswell — is effective at persuading policymakers to enact the change we seek.
Candice C. Jones is president and CEO of the Public Welfare Foundation, an endowment fund dedicated exclusively to investing in transformative approaches to youth and adult criminal justice. She is a former White House fellow and the past director of the Illinois Department of Juvenile Justice.