We're Probably Using AI Wrong

Introduction
Recent advances in artificial intelligence (AI) have generated much excitement. The promise of using AI to study and support human decision-making has received particular attention (Sourati and Evans, 2023). One application in this area is using AI to support humans in making effective use of information within information-rich environments (Mills, Costa and Sunstein, 2023).

Decision-makers often face more information than they can effectively synthesise (Sharot and Sunstein, 2020). AI may be able to support decision-makers in this task. However, this is a wholly technical solution to a problem that often has a behavioural cause. Without a behavioural approach to AI applications, these technologies are unlikely to realise substantial economic and social benefits in the short to medium term, owing to factors such as Jevons' paradox.

The risk of mis-deploying AI in everyday decision-making is more substantial than the risks pervading popular culture today. Benefits will arise from AI supporting better decision-making, not from intensifying current practices.

AI in Information-Rich Environments
The suggestion that AI can support human decision-making through information management is not new (Simon, 1987). Within marketing, "exogenous cognition" describes technologies which perform cognitive work for decision-makers (Smith et al., 2021). Similarly, "choice engines" are discussed in some behavioural economics literature (Johnson, 2021). Both terms describe technologies, such as AI-powered recommendation algorithms, which assist individuals in information-rich environments by synthesising all available information and presenting a more human-friendly subset. In policymaking, research suggests AI can be used to synthesise large corpora of behavioural health research to predict the effectiveness of interventions and support policymaker decision-making (Aonghusa and Michie, 2020). Recent advances in AI, particularly large language models (LLMs), extend accessibility further, allowing humans to query enormous datasets using natural language alone.

As such, AI may allow humans to synthesise more information into their decisions than has previously been possible. This will undoubtedly bring some benefits in instances where more information leads to substantially improved outcomes. Yet it is unwise to assume that the effective solution to problems arising from information-rich environments will be technological. Such an assumption overlooks the question of why environments are information-rich to begin with; that is, it overlooks the role of human behaviour and decision-making in creating information-rich environments.

Herbert Simon was a father of both behavioural science and artificial intelligence. His book The Sciences of the Artificial recounts an information-management episode within a US diplomatic office (Simon, 1981, p. 166). The office received a deluge of information during international incidents. Decision-makers required prompt receipt of this information, yet the teleprinters used by the office could not print fast enough to keep up with incoming messages. As a result, large informational backlogs formed during periods of crisis, undermining effective decision-making.

The solution implemented by the office was to install more teleprinters. These machines could operate in parallel, increasing the amount of information available at any given moment. Yet Simon recalls that decision-makers did not use most of the information printed. Thus, the most efficient solution was not technological but behavioural. The office should have investigated what information decision-makers actually used, and then printed only that information. Ignoring this, the office incurred costs by buying more teleprinters, and suffered opportunity costs by wasting those it already had.

AI technologies may support humans by allowing for more effective use of information that is currently unusable, or at least unnavigable. But using AI to exploit all available information only appears to be an effective solution to managing vast quantities of information when a purely technical view of the problem is taken. A superior solution is likely to be using AI to discover what information people actually use in various domains, designing systems to provide that useful information, and ceasing to collect information that is irrelevant to decision-making (Simon, 2000). This would make environments more informationally navigable, while realising economic benefits for organisations by reducing the costs of information collection.

Alas, a substantial risk is that the reverse will come to pass: the capacity to analyse ever more information with AI will spur the collection of even more information than is currently used (Beer, 1979). If the costs of communication technologies had remained high, the diplomatic office might have arrived at a behavioural solution to the problem of excessive information, for instance by setting rules on how to prioritise what information to send. Yet the declining costs of technology made the apparent technological 'solution' preferable (Simon, 1981). It is reasonable to suggest that as AI technologies become cheaper and more accessible, beyond the recent revolution in accessibility already observed, individuals and organisations will simply increase the amount of information used in decision-making, rather than responding to the behavioural factors which create information-rich environments, and thus challenges, in the first instance.

In economics, such a problem is known as Jevons' paradox. The paradox arises when a technology enables a resource to be used more efficiently. This lowers the total cost, and thus the price, of goods produced using that resource. One expectation of greater efficiency is a fall in demand for the resource. Yet, in some instances, the fall in the price of related goods raises demand for those goods, and thus for the resource, beyond the original level of demand (Jevons, 1866). A famous modern example of Jevons' paradox is highway congestion. Economists have observed that expanding highways to ease congestion is often ineffective (Duranton and Turner, 2011). Instead, expanded highways encourage more people to drive, leading to more congestion, not less.
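
The mechanism can be made concrete with a short sketch. The Python snippet below is a minimal illustration under assumed parameters (a constant-elasticity demand curve with invented numbers, not empirical estimates): when demand for a good is sufficiently price-elastic, an efficiency gain raises, rather than lowers, total use of the underlying resource.

```python
# Minimal illustration of Jevons' paradox with a constant-elasticity
# demand curve. All parameters are illustrative assumptions.

def resource_use(efficiency, base_demand=100.0, elasticity=1.5):
    """Total resource consumed after an efficiency gain.

    Assumes demand for the good is Q = base_demand * price**(-elasticity),
    that the good's price falls in proportion to efficiency
    (price = 1 / efficiency), and that each unit of the good requires
    1 / efficiency units of the resource.
    """
    price = 1.0 / efficiency
    quantity = base_demand * price ** (-elasticity)
    return quantity / efficiency

for eff in (1.0, 1.5, 2.0):
    print(f"efficiency x{eff}: resource use = {resource_use(eff):.1f}")

# With elasticity > 1 (here 1.5), doubling efficiency raises total
# resource use from 100.0 to ~141.4: the paradox. With elasticity < 1,
# the same efficiency gain would reduce total resource use.
```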

The parallel between too much traffic on roads and too much information in decision-making need not be laboured. The immediate emphasis should instead fall on the revelation that the superior solution is often to use technology to change the behaviours which cause problems of excess, rather than to accommodate those behaviours.

AI and Effective Decision-Making
The ambition of AI in human decision-making should be to use 'less but better' information, rather than 'more but worse.' Realising such an ambition requires confronting individual behaviour in several ways.

Early scholars of 'operational research' investigated problems like that of the diplomatic office (Simon, 1981). These thinkers often found that those who identify problems do not consider themselves part of the problem (Beer, 1979). Those who are reluctant to change individual and organisational decision-making processes are likely to favour external (technical) solutions to information management over internal (behavioural) solutions. If this reluctance is widespread, 'more but worse' approaches to AI are likely to dominate 'less but better' approaches. Yet, this is a broad challenge relating to organisational change generally (Simon, 2000) and could be invoked as much in a discussion of the steam engine or electric light as it could in a discussion of AI.

To be somewhat more specific, it may be worthwhile to regard AI applications in decision-making as serving as either a floodlight or a spotlight. When decision-makers use AI to analyse more information, they treat AI as a floodlight. One may illuminate the needle in the haystack, but one wastes much energy illuminating everything else in the process. This is a 'more but worse' approach to information management. When decision-makers use AI to identify what information they actually need, AI becomes a spotlight, illuminating only the needle and saving energy in the process. This is a 'less but better' approach. The conceptual distinction here is behavioural. The floodlight approach assumes the problem is wholly technical: a lack of light rather than a lack of focus. The spotlight approach recognises the role of human behaviour: a lack of focus rather than a lack of light. When one faces too much information, one may simply need less information, not more computation (Simon, 1981).

This is not to suppose that using AI to separate relevant from irrelevant information, in terms of actual decision-making, will necessarily lead to desirable outcomes. For instance, AI has been used to investigate the information that judges use when making bail decisions (Ludwig and Mullainathan, 2022). A defendant's mugshot was found to significantly predict a judge's decision, implying that judges rely heavily on the physical appearance of a defendant, and less on other information. This is an interesting result, and one which might lead one to conclude that judges need only be given a defendant's mugshot when making a judgement. Yet such a conclusion is wholly at odds with most values concerning criminal justice.
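
For readers curious how such an analysis can work in practice, the sketch below illustrates the general approach on synthetic data; it is not the authors' actual pipeline, and every feature name and number is an invented assumption. A model is trained to predict a human decision, and features are withheld in turn: the larger the drop in predictive accuracy, the more the decision evidently depends on that information.

```python
# Hypothetical 'spotlight' analysis on synthetic data: which information
# does a human decision actually depend on? Feature names are illustrative.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = rng.normal(size=(n, 3))  # columns: 0 = 'appearance', 1 = 'record', 2 = 'charge'
# Simulate decisions that lean heavily on feature 0 ('appearance').
y = (1.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def auc_without(dropped):
    """Predictive accuracy (AUC) when the given feature columns are withheld."""
    keep = [i for i in range(X.shape[1]) if i not in dropped]
    model = GradientBoostingClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te[:, keep])[:, 1])

print("all features:        ", round(auc_without([]), 3))
print("without 'appearance':", round(auc_without([0]), 3))  # accuracy collapses
print("without 'charge':    ", round(auc_without([2]), 3))  # barely changes
```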

In many instances, using AI as a spotlight to determine the information that decision-makers actually use may simultaneously reveal decision-making processes which are not conducive to the standards of an organisation, to social values, or to the objectives of individuals themselves (Sunstein, 2023). This point could be construed as a further benefit of a 'less but better' approach to AI over a 'more but worse' approach, since the latter does not reveal such behavioural patterns (Mills, Costa and Sunstein, 2023). Indeed, it is. But the argument is made here to emphasise that the benefits of a 'less but better' approach largely derive from the cost savings of not collecting 'irrelevant' information, and from reduced opportunity costs. A 'less but better' approach does not eliminate value judgements about how information ought to be used in decision-making.
