

Generative AI is Not Punk Rock

Introduction
A few weeks ago, I saw an article being shared on social media by some university's social media team. It was about the First World War. I did not read the article, and the article's content (I assume) is not relevant to this post. What caught my eye about this post was the decision--presumably by the social media team--to use an AI generated image as the heading image. The image was of soldiers in a trench, with red poppies interspersed throughout the scene, and the characteristic AI 'smoothness' over every surface. If I looked harder, perhaps I could have seen peculiarities in facial expressions, in the contours of the trenchline, in the number of fingers clasped around rifles. But I was more taken by the total redundancy of the image. I could perhaps understand, in some instances, using an AI generated image to add a visual component to a story about, say, the neolithic era. But we have photographs of the First World War. What was the editorial logic, I wondered to myself, of using AI generation here?

The more I thought about it, the more the decision seemed questionable. The photographs of the trenches depict real people, who really did suffer the horrors of war. They depict real landscapes that really did see death and destruction and had to be reassembled (in a manner of speaking). They depict animals--horses, pets, livestock--that were subject to the same brutalities as the men, and as Nature itself. There are particular images which sing with the same sorrow as the poetry which emerged from the trenches; images of thousand-yard stares, of coy smiles which reveal the human mind struggling to comprehend the madness and horror of the moment, and for too many, of the rest of their lives. These images are powerful--they mean something beyond what they show.

I am not so inclined as to suggest an AI image cannot ever take on these qualities. But I would suggest, in an instance like this, there is no competition. The AI image would be a cheap (socially speaking), disrespectful alternative to actual photographs. I think this perspective applies to other instances. Yes, there were no photographs from the neolithic era, but we still have artifacts and other such evidence of that era. One of the joys of visiting museums is to stand within inches of items and artifacts that are hundreds, if not thousands, of years old. To exist in a moment of temporal unity, as one realises whole universes existed before one was born, and that the same will be true after one is dead. A photograph of these artifacts cannot, in my opinion, substitute for the real thing. But it can, I think, act as a bridge to the real thing. I wonder how much the same can be said of an AI image.

I do not like AI images, either aesthetically, or politically. I am aware this is not a unique position. There are many smarter and more creative people than me articulating powerful arguments against AI imagery. These often revolve around questions of artistic theft, and the devaluation of art which generative AI technologies can (and in some instances, I think will) bring. I'm not necessarily interested in getting involved in this conversation, not because it is not important, but because I do not feel sufficiently equipped to really comment on it. I am also not, necessarily, opposed to AI imagery--provided questions of origination can be resolved--when these technologies are used in ways that enhance or extend the human experience, rather than merely trying to automate it. To this end, again, I do not feel qualified to write extensively. Though, I would recommend Eryk Salvaggio's 'Flowers Blooming Backward Into Noise', which is an impressive and compelling piece of criticism on these matters.

The perspective I want to focus on here could begin with this idea that images can show more than what is actually in the image. Informationally, this 'extra meaning' which can be found in some images is tacit information--information which cannot be transformed into data. From this technical perspective, of course AI images cannot readily include tacit meaning--it is not in the dataset! Yet, as above, this does not prevent people from giving meaning to images, and so does not mean that an AI image cannot, on an individual level, convey more meaning than is shown in the image. For instance, there is a (supposedly) AI generated video on YouTube which I quite enjoy. It was said to have been generated using Pink Floyd's 'Echoes' as an input (whether the lyrics were the input, or the audio, I do not know, though I suspect the former based on some of the outputs), and if true, I think the whole audio-visual experience is quite interesting. In a recent post, I suggested it is more helpful to think about why humans are inclined to assign meaning and intelligence to other entities (including AI), rather than whether something like AI is objectively intelligent, and this perspective falls into that same thought bucket.

As one cannot foreclose the possibility that someone does have a meaningful experience with an AI image, it is not helpful to simply say that AI images are bad because they get in the way of human images (and other media) which are, for some reason, full of extra meaning and therefore important. We're getting into 'death of the author' territory here, and that's not somewhere I especially want to go. Instead, I want to go one abstraction further--what determines whether a piece of media is 'good', insofar as we think it should exist? Answering this question, I think, cracks the whole AI image debate wide open. Indeed, I think it helps us understand generative AI in general. It offers insight into why someone, somewhere, felt compelled to build a computer to make images out of text. Why millions of people are enthused when such technology is at their fingertips. Why editors of a First World War blogpost decided to use an AI generated image over a photograph of the actual event. It perhaps offers insight into ourselves, too--why use an AI image, when other media show more than what they contain? What makes us elevate contents over the meaning behind them?

The Problem with 'Good' and 'Bad' Art
I can say for certain that much of what I am going to argue is derivative of Illich's ideas. Most can probably be found in 'Deschooling Society', off the top of my head (for being such an Illich fan, I am terrible at managing my notes on him). Yet, inevitably, some of the ideas I will discuss are also taken from Feyerabend. Probably from 'Science in a Free Society'. As I may have written previously, these two writers share so much--to an almost creepy extent--hence why I regard it as inevitable that I may be (am) muddling their various contributions. But that is a discussion for another day.

Illich wrote 'Deschooling Society' to criticise the modern education system, which he considered to have negative effects on human learning. School, for Illich, was an institution designed to produce its own demand. Schools teach us how to be in society, which inevitably means being educated, thus perpetuating their own existence. Furthermore--and most importantly for us here--schools are said to have a monopoly on education. While we can learn in many ways, and through doing many different things, we can only receive an education by interacting with--and sublimating ourselves to--'official', 'certified' educational facilities. I do not wholly agree with Illich's perspective, here--I think he largely overlooks how the education system is, under capitalism, a system for producing workers for economic production, rather than producing consumers for economic consumption. But I digress. The nub of the argument, as it matters here, is that there are many ways in which we can all learn, and can all acquire skills, knowledge, means of expression, and so on. But there are institutions (a term I am going to use very broadly) that exist to apply normative constraints to those activities. Put too simply, schools--for Illich--tell you what is 'good' and what is 'bad' learning, and inevitably, the learning one undertakes in school is 'good'.

(Note: Illich's ideas about institutions are fascinating, and have somewhat burrowed into my brain. For other examples, we might look to the criminal justice system (e.g., policing is an institution in the business of producing criminals) or highways (which are 'in the business' of producing traffic). Again, I must write on this at some point.)

As best as I can mentally untangle things, Feyerabend fits in with this critique of credentialism (e.g., that there is a 'right' way and a 'wrong' way to learn, and that one must be certified to be said to 'know something'). Feyerabend's earlier work in 'Against Method' argues that the notions of objectivity and rationality in the scientific method--indeed, the notion that the method itself even exists--cannot be justified. He spends much of 'Against Method' showing how scientists like Galileo used 'leaps of faith', 'propaganda', and other 'non-objective' strategies to develop and disseminate their ideas (incidentally, I do not think anything has really changed, at least on the dissemination front. See, for instance, Kingdon's book on the politics of policy adoption, and how evidence is a lot less important than 'scientists' seem to think). Evidently, Feyerabend did not think he'd done a good enough job, as much of his 'Farewell to Reason' focuses on another scientist--Einstein--using similar (and as Feyerabend would have it, non-objective) methods. But, anyway, 'Science in a Free Society'. It is a bit of a manifesto. Feyerabend argues that scientists shouldn't be given the primacy in modern society that they are given. Science, to Feyerabend, is hardly infallible, and--per his previous critiques--is rarely as golden and objective as it holds itself to be. For Feyerabend, the notion that science and scientists can divine 'objective truths' is questionable. And, insofar as they can, it is difficult to justify why these 'truths' should crowd out alternative, 'non-scientific', 'folk', 'subjective' truths.

(Note: Feyerabend argues that scientists, like everyone else, should have an equal say in the progression of society, through democratic means, and should not be given an elevated mandate to influence policy or politics without prior democratic consent. Even then, alternative perspectives--magic, shamanism, nihilism, whatever--should also be afforded the same resources as science is given. In this sense, Feyerabend is not 'anti-science' (though we might say he is anti-Science, capital S), but against the notion that non-scientific pursuits cannot also contribute worthwhile insights to society. I agree with this--I find myself reading someone like Robert Sapolsky, loving the scientific insight, the cool results, the obvious enthusiasm which comes from this guy, and thinking 'Yes, yes, you are someone I would listen to'. But I do not think we should only listen to Sapolsky (neither, I suspect, would Sapolsky); Feyerabend's concern is that this is what science pushes the rest of us to do. My personal gripe with the 'I fucking love science' crowd is not that they like science per se, but that they treat science like some people treat certain religious texts.)

Putting these two ideas together, we get a critique which questions whether institutions have the legitimacy to apply normative standards on the world, with the lack of legitimacy coming from the argument that 'objectivity' is more a vibe, more a tribal coating, than something which can actually be achieved. This is all big stuff, with big implications, and I am meant to be writing about generative AI. So, where does it fit into things?

I think it fits in like this. There is no such thing as 'good' or 'bad' art. There is no such thing as 'good' or 'bad' music, as 'representative' or 'non-representative' media. There exists no 'objective' way of determining the goodness or badness of a cultural, if not human, creation like a piece of art, or any other image. Some of us find corporate logos grotesque; some of us find them beautiful; some of us find them beautiful in the sense that they are grotesque when placed in the cityscapes of Blade Runner (grotesque, after all, means something that is beautiful and disgusting at the same time). The Mona Lisa is worth a lot of money--it is an OK painting. Starry Night is also worth a lot of money (probably less than the Mona Lisa)--it is a stunning painting. Or maybe it isn't. Beyond art--the Trabant and the Reliant Robin were bad cars, insofar as they broke down, did not look cool, and so on. But they were also great cars, insofar as they broke down, and did not look cool. It just depends who you ask.

It is hardly a big revelation to say that 'creativity is subjective' (with the word 'creativity' acting as a placeholder for a wide array of things). We all know this. But this being so, let's go back to Illich. If the things we create are always subjective, and if the notions of 'good' and 'bad' creative outputs are silly, why do so few of us share our crappy drawings online? Why do so few of us paint, or choreograph dance routines, or perform our original compositions of music (perhaps using instruments we built ourselves)? Why is it that jobs like 'graphic designer' exist, rather than an entrepreneur simply designing their company logo or website themselves? Why do whole industries of 'professional' creatives exist, supported by various technologies which allow for the 'professionalisation' of creativity, of which generative AI is now one?

I think it is because, despite us all knowing that 'creativity is subjective', very few of us believe that statement, insofar as we act on it. Very few of us are OK saying: this is something I have made, and I believe it is good because I have made it, and if you like it that is great, but if you don't like it, that is perfectly fine, too (as a musician who does not play for people, never mind playing the songs I write, I sympathise with this struggle quite a lot). This is because, from a very early age, we are 'institutionalised' to see things through various norms that split the world into normative categories. One could say we internalise the normativity of our society. I see this more and more as I grow older. I will often speak to people who say they are not as smart as me because I know some inane piece of information. But then, they might be able to tell me the exact scoreline of a football match played in 1971. This is a comparable ability--but I am an academic, and so 'smart', whereas many people I know are not academics, and so they're not 'professionally' smart, so to speak (Note: this is also one of many reasons why I hate IQ, and discussing IQ--why do we have to measure intelligence, why can't we simply admire and respect and appreciate everyone's various, if perhaps nebulous, abilities?). The same is true of all kinds of things. Ask a child if they would like to draw a picture, and few will say no on the basis that they 'cannot draw'. Drawing is an act of picking up a pencil and moving it across a piece of paper--that is it. Provided one's material means to be creative are satisfied (a discussion for another day), everyone can draw. Our ability to 'not draw' is a learnt behaviour. It is the inevitable result of us learning what 'good' and 'bad' drawing is, typically through things like feedback (e.g., grades) and failures to meet institutional expectations (e.g., producing a self-portrait that doesn't look like you).

Generative AI is not Punk Rock
I am not anti-school, and indeed, even Illich softened his position on school. I am more anti-norms. The world would be much more interesting, and I believe people much happier, if we were all more inclined to reject norms around 'good' and 'bad' creativity. Of course, we'd end up with a lot of cultural artifacts that we each think are trash--but we'd also all get new albums, paintings, novels, movies, and so on, which we each individually would adore. Set aside the consumer angle for a second. By anti-norm, I essentially mean embracing self-actualisation. As the saying goes, let your freak flag fly. Write the book that you want to write--it will typically be the act of writing itself, rather than the reception of others, which is really what you enjoy. Or, write that album you have always dreamed about--it was only when I stopped trying to impress others with my music that I started to really enjoy music, and incidentally, started to write better stuff. And so on.

Illich and Feyerabend are typically associated with anarchism. Well, Illich never declared himself an anarchist (to my knowledge), and Feyerabend was an anarchist only in the sense that he believed in an anarchy of methods for exploring reality, rather than an anarchist political system. I am not sure where I sit on the anarchism spectrum, but if I could make a contribution, I would liken both thinkers to punk philosophy. I define my punk philosophy around Mark Fisher's notion that 'punk is acting without authority'. This is essentially what I mean by anti-norm. If we all know that art is subjective, why succumb to some normative standard of 'good' or 'bad' art? If making art is what you want to do--do it. Act without authority. By extension, this punk philosophy does not necessarily mean rejecting technologies like generative AI. Again, matters of origination aside, generative AI may lead to all kinds of expressive art in the hands of empowered individuals, in the same way that the camera or the drum machine have. It is these projects I look forward to.

But what we should reject is the imposition of generative AI; the super-imposing of generative AI on human creativity. As above, I suggested that if we can explain what shapes people's perceptions of 'good' and 'bad' creativity, we can gain insights into generative AI. My answer is that we are institutionalised, internalising norms around goodness and badness, so that we learn that we are 'bad' at writing, that we 'cannot draw', and so on. It is in this environment that generative AI comes along as our saviour--can't produce cultural products to these arbitrary standards that you've internalised and that have been reinforced all your life? Don't worry, we've automated that so now you can! From this perspective, generative AI is not a 'liberatory' (Illich might say 'convivial') technology, but one that reinforces the norms which hold each of us back. So long as there are arbitrary norms that manipulate us into thinking less of ourselves, there will be technologies like generative AI which emerge to plug the gap in our self-esteem. And, in turn, I would suggest it pushes us to annihilate each other's contribution to creative works--if I believe generative AI is 'good' because I myself am a 'bad' artist, I may be predisposed to overlook the stuff that an image shows in excess of what it contains. It may undermine my desire to question, to reflect upon, why an image exists, what that image's existence could mean, and whether that image elevates an experience in a given context.

As a final remark, when I say I am anti-norm, I do not necessarily mean that there should be no rules, that we should each act as a force unto ourselves. The normative idea that murder is bad is not one I would see us readily abandon--I would not dismantle the institutions that instil the badness of murder in their citizenry. This is because murder is harmful, and the norm 'do not kill' ultimately aids society. Inevitably, there will be norms which are more ambiguous (e.g., do not take performance enhancing drugs). But some norms, I would argue, should not be opposed given the obvious benefits they bring to society (these are often codified into law, though again, I do not mean to suggest that all laws are necessarily 'good'). The norms I am 'anti' in this instance are not of this sort. It does not benefit society to teach people that they 'cannot draw' or 'cannot write' or, for that matter, that they 'are not smart' or they 'cannot do maths'. In many instances--such as personal expression of identity, sexuality, creativity, and so on--enforcing norms of 'goodness' and 'badness' does substantially more harm than good.

It is at this point that I am aware of the massive stream of thought I am tapping into. I have again ventured into realms where other people know substantially more than I do. So I will stop here, and summarise my main points thus. Generative AI meets a certain level of demand that only exists because we have internalised false norms about what 'good' creativity is. Rejecting these norms will be better for all of us--we will be happier, and have a more vibrant, creative world. Embracing generative AI without confronting why we demand these technologies is likely to only lead us to entrench these constraining norms, leading more of us to learn that we 'cannot draw'.
