

What Will AI Do To Us?

Introduction
Several years ago, I tried to write a paper arguing that AI risked destroying knowledge and skills, and that such destruction was a substantial risk to our societies. I learnt quite a lot about what knowledge has become 'lost' in the past, and why (of course, examples of truly lost knowledge cannot be given because they are, well, lost). Still, I abandoned the paper in the end, because I decided it was not an especially original idea. As early as the 1830s, Hodgskin had been arguing that the division of labour destroyed skills and debased labour, something which would remain a theme of political economy henceforth (e.g., Braverman, 1974). And around the turn of the millennium--or, at least, whenever GPS came onto the scene--people began describing a 'Google effect': that the ease with which Google enabled people to access information would inevitably lead to a dumbing down of society, a destruction of skills (like navigation skills), and so on. To say that AI would lead to the loss of skills and of knowledge, and that this would create a more fragile society, was simply to place old wine in a new cask (elements of the abandoned paper would find their way into my Economics of Time Travel paper, and I suppose I may still return to the full paper one day...).

It is obviously important to ask: 'what will AI do to us?' (though not for any AI-specific reason. As Heidegger tells us, technology is transformed by us, and transforms us. A pedant could write about the transformative power of the toaster, of the portable radio, of the flint-and-steel, and so on). The most accurate answer is: who knows? But it is not especially big-brained to suggest it will dumb us down, at least in some ways. Already, I have seen people using AI to create music. One example in particular was of a 'hardcore punk rock' song. Unfortunately, the resulting track was not a hardcore punk rock song, neither in its aesthetics--it sounded nothing like hardcore punk, just 'loud, aggressive rock music'--nor in its ethos--I cannot think of any hardcore punk who would entertain the idea that a text-to-music AI constituted the do-it-yourself, anti-establishment attitude of the genre (though, I guess, it might). Of course, the creator did not know this, and their ignorance of aesthetics in turn revealed their ignorance of ethos (it is also a nice example of my arguments about AI not hallucinating--the AI got the 'hardcore punk rock' song wrong, but the person behind the song did not actually know what 'hardcore punk rock' should sound like, so took it as an accurate, rather than hallucinatory, output). AI, in making the production of 'hardcore punk rock' so readily accessible, will forever hinder people from learning what 'hardcore punk rock' actually is. This is an isolated example, one that is close to my heart. Given the array of areas that AI is being introduced into, each reader could find an example close to their heart if they looked. AI will make posers of us all.

Equally, I do not really care if AI is 'dumbing us down.' That phrasing, that perspective, is far too elitist for my liking. While I do not think the AI song was 'hardcore punk rock', who am I really to say? Similarly, am I really justified in claiming today's taxi drivers are 'dumb' for relying on GPS for their work, or today's joiners and masons for relying on power tools? As above, one of the more remarkable things about the whole 'AI dumbing us down' argument is how unessential AI is to the argument. So, this is what this post is actually about. Not AI or technology, really, but what do we as humans mean when we say things like 'smart' or 'dumb', 'knowledge' or 'ignorance'? I would argue that having a good idea of what we mean by these terms, and when they are applicable, is essential to even begin tackling the question--which I have already answered--of 'what will AI do to us?'

Smart and Dumb Thinking
One of my favourite parts of Thomas Kuhn's Structure of Scientific Revolutions is when he discusses the role of tools, and the development of scientific tools, within science. The broad argument is that new tools can reveal new phenomena which in turn can demonstrate flaws in established ideas. If a new instrument allows you to measure 5.1, but the leading theory says that the measurement should be 5, and only 5, then greater precision in the tool has created a problem for the old theory, and through this anomaly, created the conditions for new theories to emerge. And thus science progresses. Great! But... this doesn't really mean we're getting any smarter. With the Large Hadron Collider we now know more about sub-atomic particles than could ever be imagined one hundred years ago--so what? AI technologies are offering new insights into protein research and medicine design--so what? An 'I fucking love science' type of person might point to the new technologies atomic physics will realise, or the medical benefits brought about through new drugs from AI protein analysis. And, I suppose, insofar as this extends the human race's command over nature (whatever that means), these are all 'good' things (for example, scientists knew of nuclear fusion processes before they knew of neutrons, but neutrons help fusion reactions occur, and so our knowledge of the latter (maybe) helps our knowledge of the former). But I'm not sure that makes us, as a species, smarter.

Something I tell my students is that I do not know more than them; we just know different things. The point here is to emphasise that 'smart' and 'dumb' depend on how these terms are defined. If intelligence is about how much stuff we each know, measured in bytes, then we are each essentially as intelligent as one another. We just know different things. Most people would reject this idea. Intelligence is about knowing useful or relevant information. My students consider me smart because I know things that they consider valuable or useful to know. Broadly, I would borrow a term from psychology and describe this as 'successful intelligence'--knowledge that leads to 'successful' outcomes in a given scenario (note that this can and should include things like emotional intelligence and social intelligence, two things I am desperately lacking, and which any apparent 'informational' intelligence cannot compensate for). Of course, this brings in a degree of relativism that some people will not like. Teaching my chosen subject, I will be successfully intelligent. Doing pretty much anything else... not so much. (Incidentally, that AI advocates champion superintelligence or general intelligence as a machine that can do everything reveals a detestation of any relativistic understanding of intelligence. The frequent reverence for IQ tests and other examinations by this same group of proselytisers also reveals the deeply held belief that intelligence is an objective phenomenon, rather than something which is a) relativistic; and b) socially determined.) (Also, incidentally, criticisms of such advocacy are readily available, and have been for some time. That various AI proselytisers continue to ignore these criticisms, continue to revere things like IQ, and so on, reveals, ironically, the ignorance of these boosters, rather than their intelligence. I have written somewhere previously something like 'the person who cares about IQ is the dumbest person in the room', and I stand by that.)

There are also at least two social dimensions to intelligence. Firstly, we might understand technology as making us 'smarter' by expanding the capabilities of our species. It is not so much that a new medicine makes any one of us smarter; rather, it is that, collectively speaking, 'our' knowledge of this new drug expands 'our' collective capabilities to deal with ill health, etc. This is close to the 'collective brain' perspective of some cultural researchers. I am somewhat amenable to this perspective, but only insofar as one recognises that intelligence must become divorced from the individual, and that 'getting smarter' instead means we as a species (or as a society) become more intensely and complexly interdependent upon one another (this is quite an Illichian perspective, which will probably weave throughout this post. Illich's argument is not that interdependence is bad, but that technologies can force us into states of interdependence, which can become states of domination and unfreedom. I am quite happy to rely on someone else for my food and for them to rely on me for, like, shitty commentary, but in my society, I don't really have the choice--I am forced into a state of interdependence). I think this is the perspective most people should adopt (though I'm not sure I do).

Secondly, what is 'smart' or 'dumb' is so often about social status, culture, and other forms of socialisation. I have a PhD, so people think I am smart. I have heard many people who do not have degrees describe themselves as being 'dumb' or 'thick' or other things which they are not. This is one of the more odious forms of 'intelligence'; the form that reinforces unhelpful barriers to human flourishing (it is the most damaging everyday delusion about intelligence, while obsession over IQ and 'objective' or 'natural' intelligence, obviously, has more long-term detestable and destructive consequences). It is also, unfortunately, the grand illusion behind AI's 'intelligence'. There is nothing especially impressive about AI passing an exam, except that people think that passing an exam is a sign of intelligence. Many of my students struggle to understand why AI-generated writing scores badly when I mark it, because--from their perspective--the aesthetic details of AI writing, like spelling and grammar, are perfect, and those are the traditional markers of 'good' writing. One interesting example here is language, and particularly, language which arises out of artistic expression and the blurring of cultural experiences. Most scholars of language recognise that written and spoken languages are constantly evolving and changing, that wordplay and slang are valid components of language, and so on. Yet, such recognition does little to withstand social pressures that suggest there are 'proper' ways to speak, to write, and so on (e.g., see Marine Le Pen accusing Malian singer Aya Nakamura of not actually speaking French... despite speaking French). I will probably come back to language shortly. (Note: this is why I can and should be criticised for my above comment about 'hardcore punk rock'. Who am I, or anyone else, to say?)

Let us take stock before continuing. I do not think AI will make us dumber or smarter insofar as those terms are understood as 'the amount of stuff each of us knows.' In any age, equipped with any technology, we all know the same amount of stuff; we will just know different things. Intelligence, understood as 'successful intelligence', has a relativistic component. One may be 'smart' in contexts where they possess useful knowledge (i.e., 'expertise'), and 'dumb' in contexts where nothing they know is especially useful (i.e., 'ignorance'). Neither of these perspectives meshes especially well with popular notions of intelligence being absolute--you are smart, I am dumb, etc. Introducing some social aspects to the discussion may help, and also hinder. Technology can make us, collectively, smarter if this is understood in terms of our collective capabilities--if technology expands the set of possible things we as a species or society can do. This is probably the most sensible understanding of intelligence, and of technology's role in shaping intelligence, though it again conflicts with ideas of intelligence being an absolute quality of individuals. Equally, what we think of as being 'smart' and 'dumb' is often influenced much more by arbitrary qualities we are raised to take as authoritative. Your qualifications or my lack thereof do not make either of us 'smart' or 'dumb' per se, but someone with a PhD will always command more authority over the 'smart person' label than someone without one.

What Will AI Do To Us?
So let us now answer the titular question beyond the somewhat flippant response of 'who knows?' Because we do kind of know. It is not so much that AI or any technology will make us smarter or dumber. As above, I do not necessarily believe scientific progress makes us any smarter or dumber--at best, 'progress' is merely an expansion of capabilities, not knowledge. Even this, I would suggest, is disputable, because of the social dimension of things. The car offers some capabilities which did not exist prior to its invention; but it also constrains the capabilities of those who do not have access to a car, for whatever reason, or who do not want access to a car (this is the unfreedom Illich means when he talks about technology making us forcefully interdependent on one another). Thus, there is no way of saying whether the car has made us smarter or dumber, more capable or less, better or worse. There is, as Polanyi notes, the question of trade-offs between where we are and where we could go. Do we, as a collective, want to live in a world dominated by cars, or do we want to live in a different world, with a different assortment of capabilities? As with objective notions of intelligence, the idea that 'progress' is objective and absolute has the social effect of beating the collective into the vision of whoever can decide what progress means. This is why ideas, as much as violence, are important within society. (Note: I carry this intellectual outlook over into the study of economics and wellbeing in general. When someone like Hayek argues that the free market is best because it leads to the most progress, I always ask myself: but what do you mean by progress? How are you measuring it? There is no quantity called 'progress' which we can measure, and from the accumulation of which we all benefit. The 'new optimist' movement most commonly associated with people like Steven Pinker has testified, and will continue to testify, to measures such as absolute poverty or life expectancy being 'objective' signifiers of progress, which is fine, I guess, but not something I can often claim to have a substantial interest in. I am happy being a relativist.)

A major challenge we must face as AI becomes more prevalent is the risk of AI standardising 'what it means to be smart', of the technology coming to dominate the social signifiers of 'smart' and 'dumb' and so on. And, to a lesser extent, of forcing homogeneity on diverse societies and becoming a functional tool of intellectual domination (to an extent, this idea could be linked to the case of 'economics imperialism' which some economists have championed--that their 'better' methods should lead economics to dominate many fields of study). As Illich notes in Shadow Work, we have seen such phenomena before with the printing press. Contrary to the popular narrative that the printing press was a democratising tool resisted by the European Church and the monarchies of the continent, Illich argues that the technology functionally empowered these groups to further dominate ordinary people. For instance, the printing press allowed for the standardisation of language and the creation of inelastic bureaucracies, which in turn allowed those in control of the technology to decide what 'proper' language was, what 'good' management and administration was, and so on. Rather than ordinary people communicating in dialects, developing their own slang, expressing phenomena in words and utterances and facial expressions which mattered to them, this freedom was labelled 'improper' and aggressively banished (in the banal way that bureaucracy does) as those monarchs who would soon build territorial empires around the world embarked on a project of cultural nation-building at home. Rather than organisation and management remaining a local matter, calling upon local people and institutions (for good or for bad), and demanding the exercise of local initiative and experimentation, the printing press enabled the king's laws to carry much further than the distance within which he could be heard.

The point here is not to suggest that the printing press was good or bad--as above, technology prompts a social debate about trade-offs, not an 'objective' debate about 'progress.' The point is to recognise that the introduction of the printing press created means of intellectual domination not because more writing could be produced, but because this capability gave a minority the power to standardise what 'good' and 'bad' writing (and thus language) was (a curious hypothesis to consider is that China, which developed a form of standardised printing several centuries before Gutenberg would introduce the technology in Europe, is famous for having established an enormous, standardised bureaucracy, access to which was governed by a standardised exam based on memorising various key texts. To what extent did China's earlier development of printing technologies contribute to the evolution and proliferation of the Chinese Imperial state?). This broad idea is not lacking in contemporary relevance. Evgeny Morozov, in The Net Delusion, has provocatively argued that the Internet, rather than being a liberal force for democratisation, has instead been a tool leveraged by those already possessing tremendous power to reinforce their power. In particular, through the control and engineering of culture, identity, and so on. In a recent working paper with Henrik Saetra, I have argued similarly--that AI technologies, rather than fostering greater inclusivity and representation, may operate as tools for fabricating such virtues, and in turn, eroding them to the benefit of those who already have power. I think this is the sentiment which was behind a recent post of mine about the perils of generative AI images, too.

I worry that for all the capabilities that AI might unleash--all the ways it might make us smarter and dumber, whatever those terms mean--the most notable impact of AI will be in how it shapes our imaginations about what is 'smart' and what is 'dumb', and so on. In effect, that AI becomes a force for imposing false objective standards onto our language, our culture, and our identities. And that those standards, far from being determined in a democratic way, or even in an organic way, are instead being determined by those who have the real power over the design and deployment of these technologies. I think this is the essence of the terror that researchers such as Gebru and Torres intimate when they write about the ideology of Silicon Valley, and of the modern technology ecosystem. (Note: for the avoidance of doubt, I want technology to unlock human potential. But while TESCREALism supposes that humans lack capabilities and need technologies to enhance them, I believe that humans possess capabilities which our political-economic system constrains for the benefit of a few. I want to see technologies as liberatory tools; technologies which undermine power structures. But I will not fall into the delusion, as Morozov notes, that technologies are necessarily liberatory.) It is, in part, the essence of the terror that Feyerabend tried to capture when, in the 1970s, he wrote of the threat that 'objective' Science (capital S) posed to our free society by demanding ever greater power and resources, to the detriment of the esoteric and peculiar of the world.

AI, above all else, will become a tool for social standardisation. Its advocates will proclaim that the technology is making us smarter, transforming education, unlocking productivity gains and scientific insights. And so on. And maybe these things will be so. But will there be a forum to debate not just these 'facts', but their social trappings? Probably not. Like the printing press, AI is likely to just be a tool for powerful people to decide what 'proper' language is; what 'good' art is; and what 'intelligence' means. What is worse, because so many in our society already internalise false standards about what they can do and how good they are, it is quite likely that the use of AI to reinforce these standards will not be challenged.
