Taming the unaccountability machine
“Public choice cybernetics” for the 21st century.
They used to call it “granny farming.”
In pubs and golf clubs up and down the North Wales coast in the early 1990s, that was the joke made about the entrepreneurial practice of buying up an old hotel, converting it into a retirement home, and then filling it up all year round with widows from Liverpool and Manchester who treasured memories of childhood holidays in Rhyl or Prestatyn. Reforms to the social care system had given local authorities an obligation to support service provision through the “independent” sector, so it looked like free money.
It ended badly, of course. The initial reason I got interested in economics was that I wanted to understand how the sleepy seaside resort I grew up in had become so horribly blighted by bankrupt shells of buildings owned by bankrupt shells of companies. Trying to understand exactly how and why the outsourcing of government responsibilities can go so wrong has taken me a couple of decades. And in all that time, not a single year has gone by without at least one major scandal relating to a granny farm somewhere in Britain or America.
The social care system — known in America as the “independent living” and “nursing home” system — seems to be wholly pathological, as the Covid-19 crisis brought into the open. Care homes ought to be among the most closely supervised and regulated entities in society, as they are intimately responsible for some of the most vulnerable people. But in fact, oversight seems to be all but impossible. Not only are the authorities generally quite bad at proactively detecting mismanagement and dangerous practices, they are often surprisingly bad at responding to complaints when they are made. The business of trying to find anyone who is prepared to admit to being responsible for doing anything about an urgent, even life-threatening issue is confusing and frustrating, as many people with elderly relatives know.
As with social care, so with many other aspects of public life. In the 1970s, before the Thatcher and Reagan years, people used to complain that government bureaucracy was monolithic, inefficient, and unaccountable. Today we increasingly find ourselves dealing with a strange network of bureaucrats, agencies, and private sector providers, creating a system which is fragmented, dysfunctional, and unaccountable. The outsourcing and delegation of many key functions appear to have brought neither cost savings nor improvements in delivery.
It is time to take a scientific approach.
What appear to be lacking are the twin quantities of “accountability” and “capacity” – the ability of the state to take in feedback and make changes in response to it. Past attempts to increase “efficiency” as measured by annual budget requirements appear to have had the effect of reducing the actual ability to perform tasks. So if we are going to start to address our problems, and regain a meaningful concept of “efficiency”, we have to think seriously and rigorously about these quantities.
People used to complain that government bureaucracy was monolithic, inefficient, and unaccountable. Today we have a strange network of bureaucrats, agencies, and private sector providers that is fragmented, dysfunctional — and unaccountable.
And perhaps more importantly, the “capacity” we are talking about here is specifically management capacity. There is no very complicated or intractable problem of the state’s ability to command real resources or physical objects. The problems we keep facing relate to its ability to process information – to reliably make good decisions based on the state of the world in which it finds itself.
This is the most glaring symptom afflicting the modern state/outsourcing/consultancy nexus.[1] Communication across organizational boundaries is difficult. Once a function is moved out of the government office building and into somebody else’s premises, it is intrinsically more difficult for the state to know about it. Can we find some way of putting this intuition into more rigorous terms — of going from clinical observation to diagnosis?
The theory of information in the context of problems of management and control used to be called “cybernetics.” I will argue that cybernetics offers a path out of our current malaise, and that principles drawn from information theory are more suited to the task of describing today’s problems than the mathematical models of economics. But to understand why, it’s important to think through the intellectual traditions that brought us to this juncture.
Public choice, then and now
Although grounded in Chicago price theory and neoclassical economics, the original public choice scholars such as William Niskanen made only limited use of its technical tools. Rather, they took the methods which had worked so well in describing the world of price, quantity, production, consumption, and trade, and extracted a small set of “stylized facts” which could be applied to political economy and governance. It could reasonably be argued that a great deal of twentieth-century work on public choice is based on no more than three core ideas:
1. Incentives matter and must be made compatible; decision makers tend to make choices which maximize quantities valuable to them.
2. The principle of opportunity cost must be used for measuring success or failure; outputs need to be placed on a scale where they can be compared not only with the total amount of inputs, but the foregone output which could have been produced if the inputs had been put to alternative use.
3. Outcomes are to be judged systemically and in equilibrium; the second-order and longer-term consequences of a decision may be (and often are) more important than the direct and intended consequences.
Starting from this position, how much further information does one really need to make a series of educated guesses about whether, say, military procurement budgets are likely to be well spent? About how a suburban police department might end up owning an armored personnel carrier? Or about what kinds of federal infrastructure spending need the most careful oversight, or whether to be skeptical about how rigorous the licensing requirements for hairdressers really need to be? Genuinely, not much.
Simply refusing to take things on trust, and asking what might go into a decision other than selfless and informed wisdom, often gets quick results. Mark Twain’s remark on science — “one gets such wholesale returns of conjecture out of such a trifling investment of fact” — is surprisingly close to the truth when it comes to the relationship between a few universal axioms of behavior and understanding a large system. But there are limits to how far one can progress in this way. Following the simple, axiomatic path brought us to where we are today: shrunken state capacity, widespread outsourcing, and private provision. And it hasn’t worked, so there must have been a wrong step somewhere along the way.
Respect for the problem
We started by identifying a problem – that the modern, decentralized, and outsourced bureaucracy of the new public state is a disaster in terms of accountability while failing to deliver the goods in terms of efficiency. It is certainly attractive to think that all our bureaucratic messes might be explicable with respect to a few universal truisms about incentives and measurement.
But it is usually best to start with an attitude of respect for the problem. In large and complicated systems, something which presents as a failure today is very likely to have looked like a solution when it was first established. The outsourcing/consultancy nexus is not an exception to this general rule.
One of the original problems to which outsourcing was the solution was that of empire-building and excess bureaucracy. A key problem which the public choice school predicted and empirically verified was that public sector managers tended to act as “budget-maximizers”: their salary and prestige were largely related to the amount of economic resources and personnel under their control. Consequently, there was permanent pressure to expand the provision of services, combined with perverse incentives to maximize rather than minimize their costs.
In some cases, budget maximization was quite blatant – the Army Corps of Engineers was rebuked by the Inspector General in 2000 after it was found to have put pressure on employees to manipulate cost/benefit analyses so that large projects could go ahead. In other cases, there might have been no smoking gun, but simple comparisons of the headcount in public sector bodies with their rough private sector equivalents performing similar tasks (such as buildings maintenance) provided quite strong circumstantial evidence. Public healthcare and old-age homes were certainly among the areas on which suspicion used to fall.
A key preventive measure against this tendency was to introduce some sort of market test for the cost of delivering services, by requiring them to be bought from the private sector by competitive tender. Learning-by-doing and efficiency gains were thought to belong in the private sector, where they could be applied and extended across the economy. The ideal model of public service was one in which the state was responsible for “steering rather than rowing” – setting policy in line with its democratic mandate, but not directly delivering goods and services.
How did this solution turn into a problem? Basically, because the world became more complex.
Commodity versus complexity
Every science works by creating a representation of the world, focusing on some salient details and ignoring others. Economics (and particularly the kind of price theory which was the basis of the public choice school) analyzes the world in terms of markets for commodities. A commodity, in this sense, should be taken to mean a standardized good or service – one for which quantity and price could be considered sufficient statistics. Think of the classic supply/demand diagram from introductory textbooks. It has two axes, with price on the vertical and quantity on the horizontal, and the demand curve slopes downward because buyers want less as the price rises. It doesn’t have further orthogonal axes reflecting every other possible property of a real-life good or service – quality, consistency of supply, customer service and so on – because that would make the analysis intractable.
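To make “price and quantity as sufficient statistics” concrete, here is a minimal sketch of the commodity model in code; the linear curves and their coefficients are invented purely for illustration. Everything the model knows about the good is captured by two numbers, the market-clearing price and quantity; quality, reliability of supply, and customer service have nowhere to appear.

```python
# A toy commodity market: made-up linear demand and supply curves.
# The whole state of the model is summarized by two numbers.

def demand(price: float) -> float:
    return 100 - 2 * price   # quantity buyers want at each price (illustrative)

def supply(price: float) -> float:
    return 10 + 4 * price    # quantity sellers offer at each price (illustrative)

# Solve 100 - 2p = 10 + 4p for the market-clearing price.
price = (100 - 10) / (2 + 4)   # 15.0
quantity = demand(price)       # 70.0
print(price, quantity)         # these two numbers are all the model tracks
```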
It is easy for a public sector organization to purchase automobiles and janitorial services, and it is also easy for an outside party to audit whether they are overpaying or purchasing too much. These kinds of transactions are close to the commodity model; they do not have an unmanageable quantity of characteristics and interactions.
It is more difficult for a public sector organization to enter into a complicated outsourcing contract and be sure it has got good value. A management consulting appointment, by definition, involves the sale of a service where the client can’t specify the details in advance and then negotiate on price – if they knew the details, they wouldn’t need the consultant. A service contract might be specified in detail at the beginning of its term, but the world is going to change, and provisions of the contract which made sense at the time will become irrelevant and even counterproductive.
Even this is not impossible, though. Outsourcing contracts work if there are sufficient management resources within the organization, and if attention is paid to the arrangements for reporting, monitoring, and information exchange. The model of state commissioning for external management services was what put people on the moon, after all. But it creates a significantly more difficult auditing problem.
The political theorist Patrick Dunleavy made a distinction between “delivery agencies,” which spend a majority of their budget on their own staff and on the direct provision of services, and “contract agencies,” which have a relatively small budget for their own operations, compared to the much larger sums they spend on commissioning things. Converting a delivery agency to a contract agency makes the majority of its budget more transparent, as it now largely consists of goods and services bought at market price by competitive tender. But the remaining central component – the part of the agency which decides which goods and services to buy – is extremely difficult to assess. In order to tell whether the administration of an outsourced function is being carried out efficiently, you would need to understand not just the public sector agency being audited, but the underlying activity it has commissioned.
Effectively, what has happened is this: in the early era of public choice economics, the majority of the costs located within the bureaucracy were associated with commodities, because the bureaucracy largely consisted of delivery agencies. The public choice analysis suggested that these costs were likely to have been padded, and that market-testing, competitive tender, and outsourcing could reduce them and improve efficiency. But this very process undermined the basis of the original analysis. In a system largely made up of contract agencies, the majority of the costs which remain under bureaucratic control relate not to commodity outputs, but to the cognitive and information-processing resources needed to manage complex relationships and systems.
So long as these costs were “padded”, it was easy not to notice that there was a potential problem with the cure: an overstaffed education or public housing department has plenty of spare capacity to administer its contracts. But the nature of the monitoring problem is that it is hard to know when you have cut a cognitive function too far. And the nature of the world is that as things grow, they become more complicated, and their overhead and administrative costs need to grow more than proportionately. The result is the gap in capacity that we are talking about.
Not learning by not doing
Public sector agencies engaging with the management consulting and outsourcing industries have a nest of information problems. They are required to buy an underspecified non-commodity, making it difficult to know whether the price and quantity negotiated are fair or reasonable. As time goes on, information about the true nature of their needs and what they have purchased is revealed. But the information is produced outside their organizational boundaries. There is an ongoing problem of “not learning by not doing”; the public sector’s understanding degrades, which affects its ability to successfully manage the consultancy relationship, never mind the actual service. In principle, this could be overcome by placing more resources into the administrative functions, but this is always politically unattractive; it would mean increasing costs or taking resources away from the “front line.” And the original lessons of public choice economics still create a sense that this is the wrong thing to do; bureaucracies will always claim to need more, and nobody wants to be taken for a sucker.
Ironically, the cognitive deficit and its consequences tend themselves to increase the economic costs. Too often, the professional services industry’s practice in public sector work is to underbid for the initial contract, in the knowledge that future “changes of specification” can be billed at the full hourly rate. Effectively, the side of the contract which is better placed to understand how things might develop in future is rewarded every time the agreement turns out to be inadequate to that uncertainty.
The unaccountability machine
The final irony is that unaccountability itself becomes a solution to these difficulties, rather than a problem to solve. Ignorance is the information processing system of last resort, and when all else fails, the way that decision-making entities deal with feedback that they cannot otherwise handle is to make it impossible to communicate. Rather than adapting to changing needs or correcting mistakes, consulting and outsourcing arrangements are turned into “accountability sinks,” closing down the normal channels of democratic feedback and replacing them with a blank contractual wall. As well as being psychologically (and eventually, politically) intolerable to voters, this tends to exacerbate all the other cognitive deficits – when the feedback link is broken between the consequences of decisions and the body responsible for making them, learning is almost impossible.
In other words, the true problem of public services is unchanged from the days of Niskanen; indeed, it is barely changed from when Niccolò Machiavelli wrote that “a prince who is not himself wise can never be well-advised.” It might be called “the second-guessing problem”: the difficulty of establishing accountability and control in a system where the people who are meant to be managed have more information than you do. Methods for dealing with the second-guessing problem change; what they all have in common is that they are systems for reducing the amount of information to be consistent with the bandwidth available to process it – literally, to “make things manageable” – while preserving a sufficiently accurate representation of the system so that the control decisions made will correspond to workable solutions in the real world.
Control and communication
Recognizing that this is fundamentally a problem of information processing and capacity gives us a starting point. When we were dealing with purchasing commodities, public choice economics was an approach suited to the problem. As we begin to consider wider questions of capacity and management, we need a different basis for our axioms.
In fact, such a basis exists, and it goes by the rather awkward appellation of “cybernetics.” Although the word “cybernetic” has experienced a great deal of semantic drift in the internet age, it originally meant neither more nor less than “information theory, applied to problems of control.” At the same time that Claude Shannon published “A Mathematical Theory of Communication” in the Bell System Technical Journal, Norbert Wiener was working on exactly the same mathematics, later published in his book “Cybernetics: Or Control and Communication in the Animal and the Machine.”[2] If we remember that “capacity” and “management” both really mean neither more nor less than “decision making,” then this mathematical similarity might become a little more intuitively obvious – however complex the system, managing it is at its root an information processing task.
So, can we dive into the pit of partial differential equations that cover the pages of Wiener’s book (or those in “systems thinking”, “operations research,” and related engineering disciplines), and resurface with a short list of useful principles to shape our thinking about modern governance and management? I think we can. Borrowing heavily from the work of British management scientist Stafford Beer (dubbed the “father of management cybernetics” by Wiener), I would suggest that a reasonable amount of progress can be made with three axioms:
1. For any system to remain stable, the regulatory part of the system must have at least as much capability to process information as the relevant environment has to generate it. This is known in the literature as the law of requisite variety.
2. The flow of information can be managed in two ways – by “attenuating” or selectively throwing away information from the environment, or by “amplifying” the capacity of the management system. Particular attention needs to be paid to the points where information needs to cross a boundary between organizations or levels of management; if no special measures are taken, these will act as quite restrictive attenuating filters.
3. But what counts as “information”? Basically, “any change in the environment which is relevant to the purpose of the system.” This is the basis of Stafford Beer’s greatly misunderstood dictum that “the purpose of a system is what it does” (or, acronymically, POSIWID). Far from being a boring piece of cynicism, the POSIWID principle says that you need to look analytically not at the intentions of the designers or managers, but at the information the system actually responds to – the gap between the two might reflect a compromise with other management systems or with the environment, or it might be a simple design flaw. The system is not conscious and so does not have “incentives,” but it does have consistent patterns of response to stimuli.
Added to these, we should slightly adjust the economists’ concept of “general equilibrium.” It is still important to understand that systems communicate with one another and that each system is part of the environment for those it is linked to. But we need to replace the concept of “equilibrium,” in the sense of a low-energy state to which the overall system will naturally tend, with that of “homeostasis.” This concept, found in both engineering and biology, is one of a state that is stable, like equilibrium, but one which requires a continued input of energy to maintain. (For example, if your body temperature is a constant, then it’s either being maintained homeostatically, or it’s in equilibrium with the ambient air. The first is definitely better than the second.)
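To put the first axiom and the idea of homeostasis in more concrete terms, here is a minimal toy simulation; the “temperature” variable, the number of kinds of disturbance, and the acceptable band are all invented for illustration rather than drawn from Beer’s own models. A regulator that can distinguish as many kinds of disturbance as its environment produces keeps the essential variable within bounds; one with less variety cannot. And even the successful regulator stays in bounds only through continuous corrective action at every step, which is homeostasis rather than equilibrium.

```python
import random

def simulate(env_variety: int, reg_variety: int, steps: int = 1000, seed: int = 0) -> float:
    """Toy illustration of the law of requisite variety and homeostasis.

    Each step, the environment produces one of `env_variety` kinds of
    disturbance, each pushing a single essential variable (the "temperature")
    up or down by a different amount. The regulator only knows a matched
    correction for the first `reg_variety` kinds; everything else gets a
    crude one-size-fits-all response. Returns the fraction of steps on
    which the variable stayed inside its acceptable band.
    """
    rng = random.Random(seed)
    pushes = [rng.uniform(-2.0, 2.0) for _ in range(env_variety)]  # arbitrary toy values
    temperature, in_band = 0.0, 0
    for _ in range(steps):
        kind = rng.randrange(env_variety)
        temperature += pushes[kind]                    # the disturbance arrives
        if kind < reg_variety:
            temperature -= pushes[kind]                # matched, exact correction
        else:
            temperature -= sum(pushes) / len(pushes)   # averaged, unmatched response
        in_band += abs(temperature) <= 1.0
    return in_band / steps

# A regulator with as much variety as its environment holds the essential
# variable in its band, but only by acting every single step. A regulator
# with less variety watches the variable wander out of control.
print(simulate(env_variety=8, reg_variety=8))   # 1.0
print(simulate(env_variety=8, reg_variety=3))   # much lower
```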
Axioms and application
I hope that nobody was expecting to conclude with a short and concise formula, derived from three paragraphs’ worth of axioms about cybernetics, which provided an instant and universal solution to the problems of outsourcing and state capacity. That would be like hoping that a few maxims of price theory could form the basis for an entire philosophy of government, a mistake that almost nobody makes any more. But simply giving a name to the problems often helps the search for the solution; in many ways, the secret of systems analysis is to just do the same analysis you were always going to do, but with your eyes open for problems of information flow and feedback.
For example, considering the relationship between the state and the consulting/outsourcing nexus as a “cybernetic” problem – one of information and control – gives us a way to understand and think about the tradeoffs in the design of contractual arrangements. We know that we are looking for a homeostatic state rather than an equilibrium; the world will keep changing and the relationship will require a constant input of management time and energy to preserve it. This is a cost that needs to be budgeted for; we cannot declare a saving if the efficiency gain is being bought by creating a solution that only works if things stay the same.
Looked at in this way, it’s very easy to see what’s wrong with the model where initial contracts are competitively tendered but variations are charged at full price. The relevant information set will change, and will do so in unforeseen ways (it’s almost a contradiction in terms to suggest that one can anticipate changes in the things one needs to know). The POSIWID principle tells us that when we define the information flows in a system and their links to action, we are, whether we like it or not, setting the purpose of that system. If the world changes so that those linkages of cause and effect no longer generate the results we want, that isn’t something the system can respond to – its purpose and ours are no longer the same. Flexibility in contractual arrangements is expensive, but rigidity should not be counted as a cost saving.
When we define the information flows in a system and their links to action, then we are, whether we like it or not, setting the purpose of that system. If the world changes so that those linkages don’t do what we want, then its purpose and ours are no longer the same.
The relationship also needs to consider and budget for communication channels. If the informational balance sheet is not brought into balance by conscious effort and design, it will balance itself by using the information processing system of last resort: things will be missed. Good contracting thus requires an understanding of what variability can be absorbed by the external partner and what needs to be passed on to the public sector for decision-making, matched by arrangements that ensure this information actually is transmitted, and that resources are available to make sure it arrives in time to be useful, and in a form where it can be processed and understood.
Which in turn requires consideration of what happens to those deadly pieces of information which both must be dealt with and cannot be dealt with. “Crisis” is one of the names we give to the relatively rare and sudden spikes in environmental volatility which overwhelm the systems meant to control them. In a well-designed organization (or any well-regulated system), these are not fed into the processor of last resort (that is, the rubbish bin). When something happens which is either beyond the technical capacity of the responsible party to understand, or requires decisions beyond its authority, there needs to be a fail-safe channel to pass the problem on to a higher level where it can be dealt with. And in turn, this contingency plan requires means to ensure that the information arrives in time and in a form that is legible to the higher level.
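As a sketch of what such a fail-safe channel might look like, here is a toy escalation routine; the level names, the numeric scores, and the deadline rule are all invented assumptions for illustration, loosely in the spirit of Beer’s “algedonic” alarm signals rather than a design taken from any real system. The point is simply that an issue which exceeds either the technical capacity or the authority of the level that first receives it gets passed upward, in a legible form, with a deadline that tightens as it climbs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical escalation levels, lowest to highest.
LEVELS = ["contractor", "contract_manager", "agency_board", "minister"]

@dataclass
class Issue:
    summary: str            # plain-language description, kept legible at every level
    complexity: int         # 0-10: how far the issue exceeds routine technical understanding
    authority_needed: int   # 0-10: how far it exceeds the handler's delegated powers
    raised_at: datetime

def route(issue: Issue, capacity_per_level: int = 4, authority_per_level: int = 4):
    """Pass an issue up the hierarchy until it reaches a level with enough
    capacity and authority to deal with it, and attach a deadline that
    tightens as the issue escalates, so the signal arrives while it is
    still useful. Nothing is ever routed to the rubbish bin."""
    level = 0
    while level < len(LEVELS) - 1 and (
        issue.complexity > capacity_per_level * (level + 1)
        or issue.authority_needed > authority_per_level * (level + 1)
    ):
        level += 1
    deadline = issue.raised_at + timedelta(days=max(1, 7 - 2 * level))
    return LEVELS[level], deadline

# Example: a problem beyond both the contractor's understanding and its
# delegated authority is escalated two levels up, with a three-day deadline.
crisis = Issue(summary="infection outbreak in three homes", complexity=9,
               authority_needed=8, raised_at=datetime(2024, 1, 5))
print(route(crisis))   # ('agency_board', ... 2024-01-08)
```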
In fact, a generalization of this last point is the key to taming the unaccountability machine. Unaccountability in bureaucracies is a specifically cybernetic problem, in that it is a failure to transmit and process some important kinds of information – it is much more a flaw of engineering than a moral failing. What is bad about it is a broken feedback link – there is no way for news about the consequences of decisions to be transmitted to the person or system that made them, or at least not in time to be useful or in a form that can be the basis for action.
Every system needs this kind of feedback link, connecting the very bottom levels to the very top, with all intermediate levels included. One might think of the red handle in a train driver’s cab, which is not only a physical brake but also an organizational tool, because when pulled, it sets off a process of rewriting the timetable for that day. If the handle simply stopped the train, but did not result in any further action in the head office, disaster would obviously result.
And in so far as the simple axioms of management cybernetics can give a wider diagnosis of systemic problems, this might be it. The contractual arrangements we have tended to adopt between public sector agencies and their commercial partners in the consulting/outsourcing industry have generally been designed for a world of commodities and equilibrium, and so they have not sufficiently considered the very-far-out-of-equilibrium events which require emergency feedback to the highest levels. Unfortunately, since the government is very big, rare events are happening to it all the time.
It is impossible to allow for every possibility. We are always attenuating, ignoring things and throwing away information, simply because to do otherwise would leave us overwhelmed. Literally, the world has to be made manageable. But for this very reason, all of our systems need to be designed so as to allow feedback from the people at the bottom of the heap – the users of services, the customers, the great mass that might be called “the decided-upon.”
Customers and users have one huge cognitive advantage over all other levels of our modern system, which is that they live in the real world, rather than a representation or model of that world made out of standardized reports and collated data points. If we want to make governance systems which are viable – able to maintain integrity and stability in response to problems not anticipated at the time of their design – we need to always ensure that there are ways for their perspective to be communicated. Otherwise, we are destined to gradually drift away from reality without noticing it, until catastrophe results.
Dan Davies is a former investment banker and economist who has worked at the Bank of England and as a specialist in bank capital securities and financial crises. He is the author so far of Lying For Money: How Legendary Frauds Reveal the Workings of the World and The Unaccountability Machine: Why Big Systems Make Terrible Decisions and How the World Lost Its Mind. His work occasionally appears in the Financial Times and other publications. Subscribe to his newsletter below.
[1] For current purposes, these two industries and their relation to the public sector can be considered together, although in many other ways they are quite different. As Mariana Mazzucato and Rosie Collington point out in “The Big Con”, when a catering service is outsourced, the people actually boiling the eggs are often the same staff who did the job previously and doing so in the same place at the same time – what the public body has actually outsourced is the management of its catering.
[2] The history of Shannon’s information theory and how it shaped the Information Age and the digital computer is the subject of James Gleick’s classic “The Information.” In “The Unaccountability Machine”, I try to trace some of the other half of the story and its slightly tragic failure to develop a rigorous basis for management science.