Our problems run deeper than outsourcing
Sharing the right information is one thing. Reaching decisions is another.
The opening scene of Davies’ essay is poignant. A bunch of old hotels on the northern coast of Wales have been turned into “granny farms”—that is, facilities outsourced by the government to take care of old people. And they’ve failed to remain in business even with public subsidies, let alone to provide good care. Davies lists the horrors too typically associated with nursing homes and assisted living facilities. Deluged with residents who cannot advocate for themselves, they amount to little more than nightmares we want neither for ourselves nor for our aged parents. And the central question becomes: How might we erect a regulatory framework that would, at long last, fix this problem?
Cybernetics is Davies’ answer to this question—he wants us to develop better systems for harnessing feedback, in part by being more selective with outsourcing, so that caregivers are given incentives to provide good care. This all makes good sense: better oversight of the whole system could ensure that resources are expended in the right places, that wasteful spending is eliminated, and that the provision of services is brought to a higher standard. But in my reading, one crucial element was left unexamined: Who, in the end, should make decisions about the care each resident receives?
Reorganizing along cybernetic principles, we might conclude, will provide superior guidance, leading to better decisions across the system. If, for example, a certain medical intervention emerges as a much more effective treatment for some chronic condition, that wisdom needs to be lifted from the locus where it was derived and then socialized to similar, geographically dispersed facilities whose patients confront the same challenge. Unless everyone is alerted that a certain cocktail of medicines is a much more effective way to treat patients suffering from this panoply of conditions, care teams outside the facility that stumbled onto the better intervention won’t take advantage. As a result, less effective, but perhaps more expensive, interventions will prevail elsewhere. Davies is right to decry this systemic lack of processing.
But as anyone with experience in this sector knows, particularly in the field of care for the elderly, decision-makers are often making choices based on factors entirely outside the realm of best practice. They might know that the best approach, in general, to alleviating the chronic pain of patients suffering a certain combination of maladies is Intervention A. But that’s not the only variable. What does the patient want? What does the family want? Can the facility deliver the intervention properly? What impact will any given intervention have on the staff and other nearby patients? Much more goes into any individual decision than the knowledge derived from broader processing capacity—though, of course, more knowledge drawn from cybernetics might be welcome.
The question of whether we’ve neglected to harness and process the insights that might be gleaned from the complex systems that govern society requires that we grapple with a separate question—namely, whom should we empower to put those insights into action? It may well be that best practices on bridge design, or transit access, or housing affordability could be better derived if society invested more in analytics that track the broader good, beyond mere market return. But knowing the best practice in bridge design is not sufficient to know that the prescribed bridge will prevail in the gauntlet of politics and decision-making that precedes selection of the final plan. Too many other factors are likely to intervene—the demands of neighbors, environmental impacts, building-trade interests, and the concerns of those who commute and own real estate in places likely to be affected by the second- and third-order changes elicited by a big, new infrastructure project.
It’s surely true that a lack of processing capacity often blinds us to better options at the beginning of a decision-making gauntlet, or prevents us from identifying information that should lead us to change course along the way. But in far too many cases today, the bigger barrier is the decision-making gauntlet itself. Over the last half-century, Americans fearful of faraway power brokers making choices that affect their lives have erected barriers to efficacious progress—inserting additional checks into the system so that communities aren’t bulldozed, Robert Moses-style. In cybernetic terms, we might say the system became overwhelmed by feedback, but this is less a matter of too much incoming information than of too many competing decision-makers.
No one wants to return to the days when imperious figures could entirely ignore the concerns of those affected by system decisions. But even if better processing points us to better solutions, simply knowing what works would not guarantee, given today’s dynamics, that greater wisdom would prevail.
Marc J. Dunkelman is a fellow at Brown University’s Watson Institute for International and Public Affairs and a former fellow at NYU’s Marron Institute of Urban Management. During more than a decade working in politics, he worked for Democratic members of both the Senate and the House of Representatives and as a senior fellow at the Clinton Foundation. His new book, Why Nothing Works: Who Killed Progress—and How to Bring It Back, will be published February 18 by Hachette/PublicAffairs. He is also the author of The Vanishing Neighbor: The Transformation of American Community, and his work has appeared in the New York Times, Washington Post, Wall Street Journal, Atlantic, and Politico.