

Are Computers Actually Useful?

I have recently been thinking a lot about computers. One reason is that technologies like the computer, the internet, and electricity are often brought up in the AI productivity debate. The argument goes that because these technologies have transformed society, naysayers about AI are likely to be wrong, as there were naysayers about these technologies, too. This is obviously my simplification, but it's not too far from the characterisations I have encountered in some discussions. And one problem with it--which is not a product of my simplification--is that it is very easy to confuse transformational impact with productive impact, and I think this confusion is common.

As above, I've been thinking a lot about computers, so I'll use them as an example. Most of us assume the computer has made our societies much more productive. But there are certainly some arguments to the contrary. In the 1990s, Thomas Landauer published The Trouble with Computers, which argued that the productivity impact of computers was far smaller, and more nuanced, than was believed by some, then or now. Two industries--farming and manufacturing--saw initial boosts in productivity through computer automation, but once you have these gains, they don't necessarily compound. One non-farming, non-manufacturing industry also saw big gains--telecommunications. But all other industries saw much more modest (around zero) productivity gains. This general finding reflected Robert Solow's famous paradox--that the computer age could be seen everywhere except in the productivity statistics.

Towards the end of the nineties and into the new millennium, though, the Solow Paradox appeared to be conquered. This might have reflected better training and management learning, two key criticisms found in Landauer's critique, and a reflection of 'human capital' views of technology and productivity (this remains a key pillar of Brynjolfsson's 'J-curve' argument: that new technologies initially reduce productivity, but following investments from firms, the potential of these technologies is unleashed, and productivity then increases significantly). The economist Jeffrey Sachs, however, points out that this boost in US productivity (attributed to computers) also aligns with China's entrance onto the world trading stage, which a) significantly boosted commodity prices and b) significantly lowered the prices of finished and semi-finished goods. In other words, Sachs argues, the productivity growth seen in the West at the time may well have reflected the economic re-emergence of China much more than the sudden (and apparently lagged) productivity effects of computers. That everyone expected computers to boost productivity--following years of hype and investment--might better explain why computers, rather than China, were given such credit. Furthermore, from a management perspective, if input prices fall, boosting margins, you can take no credit for improved firm performance. But if computers boosted productivity, and you chose to buy computers for the firm, you get credit. This is to say, not only were expectations primed for computers to boost productivity, potentially obscuring the role of China; firms were incentivised to attribute gains to computers, too. (Note: I think Sachs' perspective is interesting, but I'm not willing to stake my soul on it. In part, because Sachs makes this argument in passing, as an example of Alan Greenspan's misunderstanding of the US economy in the late nineties.)

Fast-forward to 2014 and Daron Acemoglu and colleagues were arguing that the Solow Paradox had returned. Their analysis adds a further layer to the whole 'computer-productivity' question. Examining US manufacturing, they found that computers do initially appear to have increased productivity. However, the authors then account for the fact that US manufacturing has shrunk in the past few decades. Those firms which have survived--the highly productive ones--have in turn increased average productivity in the industry. But this is a statistical artefact, not the result of computers. Indeed, the investment in IT technologies by these firms may not reflect their productivity benefits, but rather that successful firms are more able to invest in expensive computers (whether they are productive or not). Landauer raised this critique, too.

I should say that, for each of these critiques, there are people--often Erik Brynjolfsson--who report much more positive results about computers and productivity (though even Brynjolfsson does not deny the paradox in his J-curve hypothesis). As a headline, my reading of the literature is that the productivity impact of any technology is incredibly hard to measure, and has many more qualitative dimensions than are often acknowledged (particularly in less academic settings). This is something everyone should keep in mind in the 'AI productivity debate.' But I digress. Let's say that, actually, computers have not made us as productive as we inhabitants of a computerised world might be inclined to believe. Why do we hold this belief? Firstly, I suspect the 'just so-ness' of it all has a role to play. We all use computers, so computers must be worth using, right? (Incidentally, because computers are now so ubiquitous, while in theory they might not have made a non-computer-user more productive, their absence would certainly make a computer-user less productive, because our world is now so computer-mediated. This leads me to muse on a law of technology: any technology, no matter how useless, becomes indispensable when universally adopted. I wonder if this might apply to AI...)

Secondly, and more importantly, consider that the one non-farming, non-manufacturing area which Landauer found to have benefited substantially from computers was telecommunications. Computers have undoubtedly improved our abilities to communicate, both through a transition from analogue to digital messaging (greater accuracy), and through enabling instant messaging over the internet (faster communication). They have helped in subtler ways, too. For instance, it is faster for me to type on a computer than to write this sentence out by hand (though, it would not necessarily be faster for a typist using a typewriter, who is the actual worker to whom computer typing should be compared). Yet, while rapidity and accuracy in communications are important, the productivity of communication is also crucially influenced by decisions around what to communicate, and indeed, whether to communicate at all. These latter factors--particularly the question of whether to communicate at all--are not especially helped by computers. Indeed, they are likely to be substantially hindered. In the 1997 edition of Administrative Behavior, Herbert Simon argued that while much of human history was defined by information scarcity, computers now meant information abundance, which, in turn, meant information overload and new challenges to good decision-making. The apparent productivity benefits of computers in information management reflect the fact that intuitive measures of productivity in these areas are a) the amount of information handled; and b) the pace of information processing. For instance, in the computer productivity debate of old, the fact that computers increased the speed of typists, and thus the number of drafts of a letter they could produce, was seen--according to Landauer--as evidence of productivity. But a typist is no more efficient if a computer just allows them to produce more drafts of the same letter without meaningfully improving its quality. Neither do they become more efficient by being able to send more letters, if those additional letters produce no meaningful benefit (and, if they were previously not sent, we should be sceptical that they are now somehow valuable to an organisation).

What is interesting about this line of thinking is that much AI, and generative AI specifically, is not really a 'general purpose' technology. AI is an information management technology. Generative AI is a tool for proliferating information. It is to the AI age what the word processor was to the computer age, and it suffers the same drawbacks. Readers will know I have proposed the term 'efficient inefficiency' to describe this kind of phenomenon, but that term is not exclusive to AI. A computer that allows a typist to write more letters which will not be read is efficient inefficiency, as is a generative AI writing reports no one will read. Of course, if sending letters is one's measure of productivity--as it often was for a typist--then computers appear to deliver tremendous productivity gains. In the same way, if lines of code are the measure of productivity--as they can be in programming--then generative AI coding copilots are great innovations. Never mind the letters are not read. Never mind the code does not work.
