

Empowering Everyday Experts

The following essay was shortlisted for (though, unfortunately, did not win) the Bennett Institute for Public Policy Prize 2024. It is mostly an expanded version of this post I made in late 2023. This version of the essay was published on my Substack in March 2024.

Introduction
My first job after university was as an analyst for a large organisation in Manchester. The organisation was introducing a new finance system, and it was not going well. Many employees did not understand how the system worked. They were causing errors, and I had to fix them. Unfortunately, the errors never stopped. So, I decided to investigate the problem myself.

I started by asking my fellow employees what they did not understand about the system. The usual culprits came up: resistance to change and a lack of training. Yet, I realised these were symptoms of the problem, rather than its causes.

In chatting, I learned the new system was being introduced for one reason. Or, more specifically, one person. Every month, the finance director demanded a one-thousand-page summary of the organisation's financials. This report took the finance team around two weeks to produce. The new system, supposedly, would produce the report in a single click.

Obviously, the directors thought the new system was a no-brainer. After all, it would double productivity. But to me, the report itself was the crucial error to be solved. There was no world in which anyone was reading this report every month. Much less using the information contained within to make worthwhile decisions. The report was wasting employees' time. And with the introduction of the new system, wasting tens of millions of pounds, too. The solution, to me, was simple: fire the finance director.

Unsurprisingly, this suggestion was ignored.

I left this job to return to university, where I came to some relevant conclusions. Firstly, that most organisations are inefficient. Secondly, that few understand why. Thirdly, that technology can help. But, fourthly, that it often does not. Over the years, I have collected stories like my own experience. In teaching policymakers and business executives, I have discovered those with similar experiences. And with advances in AI, I believe these ideas are increasingly important.

In this essay, I argue AI can improve public services by empowering the everyday experts who deliver them--our public servants. This doesn’t mean using AI to analyse or generate more information. Today, many of the problems organisations face arise from too much information, which means decision-makers often struggle to access the right information. Using AI to get the right information to the right people will allow us to unlock more of the talent in our public services. And our services will benefit as a result.

Yet, to arrive at this argument, I must slay a dragon: the belief that more information is always better. To do so, we must understand how teleprinters work. We must learn lessons from city planning. And we must learn how to tell if a baby is healthy. In doing so, I hope to convince you that when someone knows what they're doing, most things can be ignored.

Solving Problems Without Solutions
We live in an era of big data. Smartphones track innumerable aspects of our everyday lives. Meanwhile, satellite systems track our planet’s weather every day. Digital information is the thread which binds the modern world together.

Without information, modern AI would be impossible (Russell and Norvig, 2009). Consider a classic computer science problem: natural language processing. The question at the heart of this problem is: how can a computer understand the meaning of words? It is quite easy to tell a computer that a word exists. But think of the word 'cat.' Most likely, a cloud of associations--images, emotions, memories--popped into your head. These associations capture the semantic meaning of the word 'cat.' Philosophers might call it the cat's essence. Yet, these associations are very difficult to describe. As such, they are difficult to teach to a computer (Minsky and Papert, 2017).

Modern AI solves this problem with data. Given enough data, an AI can calculate which words are often used together (Mikolov et al., 2013). Then, by mimicking these patterns, it can appear to understand their semantic meaning. This is what large language models like ChatGPT do (Wolfram, 2023). The key to this is a powerful idea: that with enough data, we can solve problems without knowing the solution (Joque, 2022).
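For the technically curious, the co-occurrence idea can be sketched in a few lines of Python. This toy example is nothing like a production model, and the corpus is invented, but it shows how a sense of similarity can emerge from counting alone.

```python
# A toy version of the co-occurrence idea: words used in similar
# contexts end up with similar count vectors. Corpus and window size
# are invented; real models (e.g. word2vec) are far more sophisticated.
from collections import Counter, defaultdict
import math

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the mouse",
    "the dog chased the ball",
]

# Count how often each word appears within two places of each other word.
contexts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        for neighbour in words[max(0, i - 2):i] + words[i + 1:i + 3]:
            contexts[word][neighbour] += 1

def similarity(a, b):
    """Cosine similarity between two words' context-count vectors."""
    dot = sum(contexts[a][w] * contexts[b][w] for w in contexts[a])
    norm = lambda word: math.sqrt(sum(c * c for c in contexts[word].values()))
    return dot / (norm(a) * norm(b))

# 'cat' and 'dog' score as similar because they appear in similar
# contexts, even though the program was never told what either means.
print(similarity("cat", "dog"))  # high
print(similarity("cat", "mat"))  # lower
```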

Recommendation algorithms, targeted ads, and prediction models all rely on this idea. Older AI systems, known as symbolic AI, did not. Symbolic AI approaches gave computers general rules to solve a variety of problems. But most of what interests humans--like cats--cannot be adequately defined using a series of rules. As one joke goes, a barman asks a customer what a chair is. The customer responds, “it has four legs, and you can sit on it.” The barman guffaws: “I don’t see any horses in here!”

Solving problems without knowing solutions has revolutionised AI. The idea is the bedrock of our modern information society. But it requires a lot of data. Today, scholars and engineers boast of ever-larger models and ever-more variables. Some even speculate about a world of 'N = all'--a world where everyone is in the dataset (Kitchin and McArdle, 2016). As one business leader once told me at a forum on AI: more is better, all is best.

More Is Better, All Is Best
'More is better, all is best' is not an AI innovation. It is human psychology. Peter Drucker observes that organisations always desire more information (Drucker, 2006). This is especially true when people must make (or delay) difficult decisions. In everyday life, we like information, too. Humans dislike ambiguity and uncertainty (Ellsberg, 1961). More information can make us feel more comfortable. And when mistakes arise, a lack of information is almost always blamed as the culprit.

Recent AI innovations play on our instinctive desire for more information. Generative AI is an obvious example. It allows an HR department to draft dozens of job descriptions. It enables a PR department to play with dozens of press releases. It can generate innumerable images, videos, and PowerPoint presentations, all in an afternoon. Policymakers can generate public information posters. For every town. For every age group. For every day of the week. With generative AI, you can have any colour you want, including all of them.

Less eye-catching, though still important, is the role of AI as a data analysis tool. Large organisations--particularly in the public sector--dedicate many resources to data management. Figuring out what to do next requires expensive data scientists and policy analysts. Analysing more data has rarely come cheap. But modern AI can easily scale to analyse as much data as one would like (LeCun, Bengio and Hinton, 2015). It may not know as much as the experts whose work it takes over. But if the answer is somewhere in the data, that might not matter. Data analysis is becoming commodified (Zuboff, 2015). And the cost savings are rarely lost on fiscally minded politicians.

With modern AI, it is easier than ever to satisfy the 'more is better, all is best' urge that many organisations have. But while AI proliferates information cheaply, it also risks creating a lot of waste. Firstly, money may be wasted on unnecessary data, rather than helping cash-strapped public services. Secondly, AI may be wasted on analysing unhelpful data, rather than supporting public services. To appreciate these sources of waste, let’s start with a look at city planning.

How Not to Build a Highway
Congestion is an important consideration in modern city planning. It slows down the flow of goods, annoys drivers, and worsens air quality. Congestion is bad. But solving congestion is not easy. One idea which occurs to most people is to expand the roads so there is more car capacity. Unfortunately, evidence shows this instead makes congestion worse (Duranton and Turner, 2011). Adding capacity encourages more people to drive. Often, in fact, it encourages more people to drive than the expanded road can handle. And so, congestion returns.

In economics, we call this phenomenon Jevons' paradox. In the 1860s, more efficient coal furnaces led people to predict that demand for coal would fall. But William Stanley Jevons observed demand actually increased. The more efficient furnace allowed manufacturers to lower the prices of their goods. This increased demand for these goods, and thus demand for manufacturing inputs, like coal (Jevons, 1866).
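A back-of-the-envelope example makes the mechanism clear. The numbers below are entirely made up, but they show how an efficiency gain can increase total resource use once demand responds to lower prices.

```python
# Jevons' paradox with made-up numbers. A furnace becomes twice as
# efficient, halving the coal needed (and so the fuel cost) per unit
# of goods. Cheaper goods attract more buyers, so if sales more than
# double, total coal consumption rises despite the efficiency gain.
coal_per_unit_before, coal_per_unit_after = 2.0, 1.0  # tonnes of coal per unit of goods
units_sold_before, units_sold_after = 100, 250        # demand response to lower prices

print(coal_per_unit_before * units_sold_before)  # 200.0 tonnes of coal before
print(coal_per_unit_after * units_sold_after)    # 250.0 tonnes after: more coal, not less
```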

Jevons' paradox is likely true of AI. When used for analysis, AI is reducing the costs of analysing more information. Though, not the costs of collecting information. When used generatively, AI is reducing the costs of proliferating information. Though, not the costs of navigating information. Both uses are seeing sizeable demand, and rapid adoption (Porter, 2023). Rather than analysing existing data for a lower cost, organisations are starting to collect more data (Srnicek, 2016). Rather than refining the options that decision-makers must navigate, organisations are generating more options. AI can do many things, but the belief in 'more is better, all is best' means reducing current costs is unlikely to be one of them.

Regardless, AI may still produce substantial benefits. Jevons observed prices falling, output rising, and technology becoming more efficient. AI may allow us to identify previously hidden insights in our public services. Public services will still benefit, provided these new insights outweigh heightened costs. For instance, data collection costs.

But as the problem of congestion shows, Jevons' paradox isn't always good. We can collect more data because AI makes it cheap to analyse. We can explore more ideas because AI makes them cheap to generate. But collecting more data, and sifting through more generative outputs, costs money. This will be a waste if the extra data or additional ideas produce no new insights. Again, there may be valuable insights! But any successful organisation probably already does most things pretty well. Thus, rather than letting AI satisfy the urge of 'more is better, all is best', we should ask a simple question. Will more information lead to better decisions? If the answer is no, acting as if it is yes will just waste money.

Most Things Can Be Ignored
It will also waste the technology itself. Consider two examples.

One of the fathers of AI, Herbert Simon, tells a story about a U.S. diplomatic office in the 1960s (Simon, 1981). The office received important communications via telegrams. Telegrams were then printed using teleprinters, and printouts given to decision-makers. But there was a problem. During a diplomatic crisis, telegrams would swamp the office. The teleprinters could only print so fast, trapping important information in a backlog. By the time the information reached decision-makers, it was significantly less useful. The office, naturally, bought more teleprinters. But the problem soon returned. They could print more, so they received more. Classic Jevons' paradox. Simon's solution was simple, and radical. He pointed out that most of the information printed was never used by decision-makers. The office could eliminate the backlog, with fewer teleprinters, if it printed only what was necessary.

Back to Drucker. He tells the story of Robert McNamara's time in the Department of Defense (DoD; Drucker, 2006). A big problem facing McNamara was the U.S. military budget. Procurement costs were spiralling out of control. This wasn't just an effect of the Cold War. The army required tens of thousands of unique items. Supplying them was a complicated, and thus expensive, task. The DoD had commissioned several studies to understand the issue. Reports inches thick described how the army acquired each item. Yet, despite more information, costs continued to spiral. McNamara's solution was simple, and radical. He asked his staff to order every item by its contribution to the total cost. When this showed 4% of items accounted for 90% of costs, he told his team to ignore the other 96%. Costs soon came under control.
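The logic of McNamara's ordering fits in a few lines. The items and costs below are invented, but the procedure is the one described above: rank by cost, accumulate, and draw the line.

```python
# A sketch of McNamara's ordering: rank items by cost, accumulate, and
# draw a line once 90% of spending is covered. All figures are invented.
items = {
    "jet_engine": 5_000_000, "radar_unit": 2_000_000, "truck": 400_000,
    "rifle": 50_000, "boots": 10_000, "rations": 8_000, "buttons": 500,
}

total = sum(items.values())
running = 0.0
for name, cost in sorted(items.items(), key=lambda kv: kv[1], reverse=True):
    running += cost
    print(f"{name:12s} {cost:>10,}  cumulative share: {running / total:.0%}")
    if running / total >= 0.90:
        print("everything below this point can, for now, be ignored")
        break
```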

I love these stories. My story about the finance director falls into the same genre. They show that organisations often assume problems arise from a lack of information. But often, too much information prevents a solution from appearing. Using AI, or other technologies, to proliferate information will further bury solutions. And, perhaps worse, we will waste transformative technologies patching up bad organisational systems.

Empowering Everyday Experts
However, in each story, one still needs to figure out what information to ignore. This is where AI can make a great contribution to public services.

In the 1980s, Herbert Simon popped up again. This time, he argued we should use AI in conjunction with expert judgement (Simon, 1987a). Nurses, doctors, teachers, and others are everyday experts. They know a huge amount about services which impact everyone's life. Simon pointed out that everyday experts often have fantastic intuitions (Simon, 1987b). Experts can make very good decisions, very quickly, when given the right information. For instance, take the Apgar test. Developed by the anaesthesiologist Virginia Apgar, this test quickly assesses the health of newborns. Rather than running a bunch of tests, a nurse checks five physical vitals. They give each vital a score from 0 to 2. If the total score is more than 7, the baby is probably in good health (Calmes, 2015).
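The test's decision rule is simple enough to write out. The sketch below follows the description above and is an illustration of the rule only, not clinical guidance.

```python
# The Apgar decision rule as described above: five vitals, each scored
# 0 to 2, with a total above 7 read as "probably in good health".
APGAR_VITALS = ("appearance", "pulse", "grimace", "activity", "respiration")

def apgar(scores: dict) -> tuple:
    """Sum the five vital scores and apply the threshold from the essay."""
    assert set(scores) == set(APGAR_VITALS), "score all five vitals"
    assert all(0 <= s <= 2 for s in scores.values()), "each vital scores 0-2"
    total = sum(scores.values())
    return total, total > 7

total, probably_healthy = apgar({"appearance": 2, "pulse": 2, "grimace": 1,
                                 "activity": 2, "respiration": 2})
print(total, probably_healthy)  # 9 True
```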

There are two issues with everyday expertise. Firstly, most experts aren't conscious of how they make decisions. Chess grandmasters do not consciously analyse each piece; they just know broad positions. Figuring out how everyday experts work their magic, then, is difficult. Secondly, experts can make mistakes when overloaded with unnecessary information. Show a chess grandmaster a board state that would never occur in a game, and they'll struggle to find the best move. In the era of big data, experts must spend more and more time blocking out noise which others mistake for signal (Simon, 1981).

Simon proposed to use AI as an expert support system to solve these problems (Simon, 1987a). By comparing expert decisions to available information, AI can learn which information matters. Then, AI can filter out the noise, leaving everyday experts with the right information. In some areas of public service, this use of AI is already happening. For instance, there is a lot of public health research. Too much, in fact, for policymakers to ever read, let alone synthesise. Recently, researchers have built an AI system trained on all this health research (Mac Aonghusa and Michie, 2020). Policymakers can then specify policy-relevant details. For instance, the setting of the policy intervention. Then, the AI suggests a range of approaches. This includes estimates of each approach's likelihood of success. Expert policymakers then decide what to do. What matters here is not that AI analyses more information. Rather, that it acts as a tool to sift through the excess that experts already face, allowing them to focus on the decisions that matter.
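To give a flavour of how such an expert support system might learn what matters, here is a minimal sketch with invented data and feature names. It uses an off-the-shelf model (scikit-learn's random forest); the real systems cited above are, of course, far more sophisticated.

```python
# A minimal sketch of Simon's idea: learn from past expert decisions
# which inputs actually matter, then surface only those. The records
# and feature names are invented; scikit-learn stands in for whatever
# model a real system would use.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["vital_sign_a", "vital_sign_b", "admin_code", "form_colour"]

# Synthetic history: the expert's decisions depend only on the two
# vitals; the other columns are noise the expert has learned to ignore.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # past expert decisions

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank inputs by how much they explain the expert's decisions, so the
# support system can show decision-makers only what carries signal.
for name, score in sorted(zip(features, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name:14s} {score:.2f}")
```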

By empowering everyday experts with AI, we can improve our public services. It would give our public servants the tools they need to do what they do best. It would reaffirm our trust and pride in their expertise. And it would keep our public servants in control of the AI, rather than letting an AI direct them. Thus, it enshrines public service accountability.

AI is impressive. It creates opportunities that, only recently, few imagined. But there is a huge amount of talent within our public services. Using AI to empower these everyday experts is essential. While the potential for AI to expand the information available to us is interesting, it is not a panacea. Sometimes, we will discover valuable insights. Often, though, we'll waste money and technology. The answer to the question 'how can AI be implemented to improve public services?' is simple, but radical.
