The Human Premium
Introduction
Few academics realise that they are in the content creation game, and few content creators realise that they are in the landlord game. Let's look at a clear example. If I write a book, I earn royalties from its sale. Whether it sells well or not, I now own an asset--the copyright--from which I receive income without further improving it. If I make a YouTube video, I now have a claim on a share of the advertising revenue that video generates, whether it is the latest thing I have made or something I made ten years ago. The same is true of music, movies, books, videogames, and so on. The production of content is the production of intellectual property from which rents can be generated. And so, despite the academic tendency to see their work as a vocation, a service to humankind (etc.), most plainly, academics are content creators, and content creators are landlords.
But content creators (academics included) have multiple rent-based revenue streams. Beyond simply owning IP, they also rent themselves out to others. Now, all workers do this to an extent, through the sale of labour power. But I do not mean selling labour power; I mean selling the 'idea' of themselves--their prestige and reputation. Academically speaking, there is a whole academic-industrial complex for this stuff. An industry wants to say that X is good, or bad; an academic (whether they believe it or not) receives money to say X is good or bad; the industry then launders its view of X by citing the academic's view, and highlighting the reputation and prestige of the cited academic. I do not say this to suggest that all academic advocacy is cynical, or that there is some grand conspiracy. I mean it in the way Noam Chomsky described media bias to Andrew Marr in that infamous interview: more often than not, it is not that academics disbelieve what they are saying, but rather that big business would not be listening to them if they did not already believe whatever big business thinks.
In many ways, content creators generate most of their rents through parasociality. A consumer likes a piece of content that a creator produces because they personally like the creator. Thus, they are more attracted to another piece of content that they might not otherwise be interested in; or, they come to be interested in whatever it is that the content creator advocates for (be that video games or political opinions). In this sense, the content creator does not generate a rent from their content so much as they do from the parasocial connection that their content enables them to make (just as an academic earns prestige from publishing books, and thus both the book and the prestige can generate rents for the academic). In the study of platforms, this could be likened to a 'qualitative network effect'.
The grubby game of rent-seeking in content creation or academia isn't the main focus of this post, though it features in the backdrop. These dynamics are essential, in my opinion, to understanding how AI is likely to change content creation henceforth. (As an aside, I think most AI content creation is bad, or more specifically, cringe. While I am certainly not a taste-maker or in touch with the hip happenings of the culture today, I do believe that one of the fastest ways to tell someone has no taste is to gauge their opinions on AI generated content. It is basically 2024's take on 2021's NFT litmus test.) Something I spend a lot of time thinking about, though have come to no solid conclusions on yet, is what happens when AI content generation leads to tremendous overproduction. In economics, the invention of a new machine which leads to substantial decreases in the cost of producing a good typically does not end well for the workers involved in that industry, or even for that industry itself. Overproduction demands consolidation to return supply to a profitable level. This does not bode well for content creators today, who face a dual threat. Firstly, the threat of their labours being automated by AI products. But, more importantly, a second threat: that of being drowned out by a torrent of cheap, never-ending AI generated content. (Something I have been thinking about for a while is the idea of a never-ending soap opera. If AI can generate content faster than a person can consume it, which it probably can, there is no reason that a company like Netflix might not, one day, launch a soap opera which literally never ends--by the time one is finished with one episode, the next has already been generated.) Both threats erode the value of content creation. In this post, I want to focus on the second.
Search Costs
For all the talk of AI changing the world, the proliferation of AI generated shit is likely to lead to an explosion of search costs. Search costs are the costs incurred when one is trying to find something (obviously). Typically, these 'costs' will be non-pecuniary. For instance, a long menu in a restaurant might take time to browse before a person finds something to order, during which time the waiter is presumably growing impatient and the diner is growing hungry. In some of my work on deceptive choice architecture, we focus a lot on the time and stress costs of navigating websites and online services, and so on. One of the major advantages of things like recommendation algorithms is that they reduce search costs. By recommending something you'll probably like, say a video, the algorithm cuts down the amount of time you spend searching the website for something entertaining to watch. Similarly, the search bar is another search-cost-busting piece of online design.
It is genuinely worthwhile to consider whether our current devices for navigating online content can cope with AI generated materials. I am not sure we yet have an answer to this. On the one hand, of course recommendation algorithms and other such tools can cope. The whole point of a recommendation algorithm is not to find the best piece of content for you, but an adequate (a 'satisficing') piece of content. As AI proliferates content, it should be easier to find adequate content because there will be so much more of it. On the other hand: a) it seems likely that the volume of inadequate content will grow faster than the volume of adequate content; and b) it seems likely that AI content generation, just like human content generation, will become enmeshed in a game of chasing the algorithm, leading to weird and not wholly desirable 'local optima'. This is the same phenomenon which has brought us gormless YouTuber thumbnails and several million superhero movies. We might be able to identify it in the increasingly bizarre AI content on Facebook, which seems to emerge from different content farms simply reproducing each other's successful prompts uncritically.
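To make that first worry a little more concrete, here is a minimal sketch of a satisficing search: a consumer samples items at random until they find one they consider adequate. If AI floods the catalogue and the share of adequate items shrinks, the expected number of items a person must wade through grows in proportion, regardless of how large the catalogue becomes. The adequate-share figures below are purely illustrative assumptions, not data about any real platform.

```python
import random

def expected_items_sampled(adequate_share: float, trials: int = 100_000) -> float:
    """Simulate a satisficing search: keep sampling items at random until one
    is 'adequate'. Returns the average number of items examined, which for an
    adequate share p converges to roughly 1/p (a geometric distribution)."""
    total_examined = 0
    for _ in range(trials):
        examined = 0
        while True:
            examined += 1
            if random.random() < adequate_share:  # this item was adequate; stop searching
                break
        total_examined += examined
    return total_examined / trials

# Illustrative (made-up) shares of adequate content before and after a flood
# of cheap AI-generated material dilutes the catalogue.
for label, share in [("before the flood", 0.10), ("after the flood", 0.01)]:
    print(f"{label}: ~{expected_items_sampled(share):.1f} items examined on average")
```

The toy model only illustrates the point that, under satisficing, search costs scale with the scarcity of adequate content rather than with the size of the catalogue; recommendation algorithms earn their keep by raising the effective adequate share for each user.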
I do not know if that is something to look forward to or not. I am sometimes accused of being a techno-pessimist, but I am more of a techno-apathetist. The future will be whatever it is; in some ways it will be good, in other ways it will be bad, and we will all get some of each. I am not beyond believing that a creative person, equipped with generative AI, could create something wonderful. I am also quite sure that these technologies will be used for socially damaging ends. I could say that is the nature of technology, but it is much more the nature of human beings, technology be damned. Thus, I want to approach the problem of search costs in a world of AI content proliferation not from the perspective of 'is this content good or bad?' but from the perspective of 'how can I determine whether I will like/trust/care for this piece of content?' And it is here that I think humans--or at least some humans--have a special role to play.
Parasociality and the Human Premium
Recommendation algorithms help us reduce search costs. But so too do social skills. For instance, we can often decide who we will vote for without knowing anything about a candidate's policy platform--what I have heard previously called 'Simon's Paradox'. We form group relationships very easily, sometimes around the silliest of things. For good or for ill, these group identities can help us simplify and 'survive' choices which would otherwise be very taxing and difficult to rationally navigate. This aspect of our bounded rationality has been criticised. The ease with which we form group identities, for instance, can--and has--led to harmful in-group/out-group dynamics. It is likely a contributing factor to behaviours such as racism, sexism, and so on. Some, such as Paul Dolan, have taken this critique of human extrapolation from commonality a bit further. He has tried to coin the term 'beliefism'--the idea that we discriminate against people based on our interpretation of a few of their expressed beliefs. I am not especially convinced by Dolan's arguments (I told him as much in the notes I sent back on a draft of his book). While I do not believe in aggressive exclusion based on a disagreement over, say, how the tax system should work, I also think a) there may be reasonable correlates between a range of views, and extrapolating from one view to several others is not necessarily flawed; and b) we often do not even know, fully or coherently, what we ourselves think and feel and believe, and so there is no means other than extrapolation from limited information and experience for others to form opinions about us, and us about them.
For all the potential harm or bias or whatever that our social skills, group identities, and capacities to extrapolate from little information create, these are also vital cognitive mechanisms for surviving in a world much too complicated for fully rational computation. This is the essence of bounded rationality. An extrapolation from limited information helps us, in a manner analogous to a recommendation algorithm or a search bar, to find outcomes which we consider adequate. We do it all the time. People watch YouTube content by creators that they enjoy, trust, identify with, and so on. A recent Pew Research report suggests one in five Americans receive news primarily from their favourite influencers. Qualifications are another example. My ability to influence people on certain topics is greatly amplified by the level of qualification that I hold, and the institutional associations which also adorn my CV.
Hence the title of this post, The Human Premium. I think it is likely that, as AI proliferates content by driving the cost of production close to zero, people will become increasingly reliant on existing (though flawed) social signals of quality to navigate the torrent. Readers of this blog will know that I am not necessarily positive about the role social labels play in our society. For instance, my having a PhD should not mean that others assume they know less than me, are intellectually inferior to me, and so on. I am, and will remain, an idiot. Nevertheless, I do recognise--as above--that we are predisposed to latch onto and extrapolate from such labels, and if we abandoned our current standards of intellect, culture, taste (etc.), new ones would emerge just as arbitrarily (and new stories would emerge to rationalise away the arbitrariness). So, I do not want to spend the rest of this post ranting about how we should put less stock in social signals of success and embrace a more punk, do-it-yourself mindset. I have written enough about that.
Instead, I want to comment on inequality. One of the weirdest things about having a PhD is the sudden realisation that one possesses a valuable social label (smart person--note, it probably helps that I have a pretentious blog). It grants a certain degree of influence that I am personally uncomfortable with, though I know others seem to enjoy it. A critical mind should ask why some people receive or acquire these things which signal social status while others lack them. Obviously, this is a big question, and the answer is: something to do with inequality. I will let more informed people fill in the gap here. The point is that the social signals which give a person influence are not equally distributed. Often, by virtue of possessing such influence, one is already doing quite well for oneself (though I recognise there are plenty of influencers out there who just kind of struck it lucky). And if such signals and the corresponding influence are likely to increase in value in an era of AI, such inequalities are likely to increase too.
I recognise that a worthwhile argument against mine is that AI is actually likely to make knowledge, content creation, and other such things more accessible, and thus lower the value of the very social signals I claim to dislike. What value will a PhD provide when everyone has access to a superintelligent AI which can generate a PhD thesis in seconds (I am not convinced we will ever get there, but I'm playing Devil's advocate)? And it is true that this kind of argument is doing the rounds. I agree that there is maybe a case that AI 'democratises' certain parts of life which have formerly been closed off to those lacking qualifications or video editing skills or musical instrument-playing abilities. For instance, I think Petr Specian's work on how AI could level the knowledge playing field in terms of democracy is interesting (and one Henrik Saetra and I have built upon a bit in terms of representative decision-making). But there is a fallacy in this argument. The value of these 'social signals' or 'credentials' or claims to 'taste' is not that there is actually something special about those who hold them. The 'democratising' power of AI, insofar as it may allow us all to become doctors or to start successful YouTube channels, is only that it lowers artificial costs. It does not erode the social value of the signals themselves. This is because, as above, we use claims to status and so on as tools for navigating the world (and not merely navigating information, but navigating social situations--learning, as it were, our place in the pecking order). AI may cause many existing signals--like higher education qualifications--to become less important, but it will not dislodge the human desire for such signals. Accepting this, because doing so is interesting, we can consider a few different scenarios.
Three Scenarios
The first is that new signals will emerge to guide us through the world. This is not necessarily the result of AI, but a tendency which AI may accelerate. A major disruption which our societies have been grappling with for a while is that of social media. For some, social media has been a 'democratising voice', giving a platform to alternative views which mainstream institutions had previously excluded--the 'do your own research' crowd. For others, this is where the misinformation and disinformation crises come from. For the purposes of this discussion, I largely do not care whether this 'Tik Tok' scenario is good or bad. Instead, I want to emphasise the arbitrariness of it all. While (I would argue) old signals are also quite arbitrary, the barriers to acquiring substantial social influence (e.g., getting a PhD, growing a large YouTube channel, getting a book deal or newspaper column) meant that the force of this arbitrary allocation was moderated somewhat. Social media has created a rupture where older standards of influence are breaking, and AI is likely to accelerate this. Under the 'Tik Tok' scenario, ordinary people increasingly find themselves suddenly commanding social influence, perhaps for reasons they cannot really grasp, and must improvise when the horde proclaims 'you are the messiah', or, perhaps, 'burn the witch'.
The second is that the old signals gain a newfound importance precisely because it becomes easier to look like an 'expert'. The analogy which comes to mind is that of new money and old money. Old money people will often be much poorer than new money people, but of course, it was never about the money. Old money people will often look down on new money precisely because the money is new. Those who are recently wealthy lack the deep, historic signifiers of real quality and taste, and though they can recreate the aesthetics of quality and taste (again, more easily than old money can), their recreations will never be quite right. I can readily imagine a world where savvy professors, who no longer write interesting papers or have much connection with active research, leverage their 'old money' credentials to become 'gurus' and 'thought leaders' in the age of AI (the likes of Jordan Peterson and Yuval Noah Harari come to mind, the former capitalising on various ruptures around social justice and the latter on the anxiety surrounding AI itself). Who cares if their research could be done by a suite of AI tools--that's new money, and they (the professors) are old money. This is to say, the 'cheapness' of AI actually leads us to elevate the (false) value of the 'authentic' original. This 'old money' scenario is not me proposing a positive future. As above, we must recognise that status and influence are already unequally distributed, and this scenario would just exacerbate these inequalities.
The third, and perhaps the most meta, idea can be arrived at via the question: 'what happens when AI itself is considered the taste-maker or thought leader?' I think it is undeniable that some people already over-personify AI tools, talking as if these pieces of software really are thinking, and so on. Is it too far to suggest that some people might start to project some kind of personality and set of preferences onto large language models? More charitably, maybe one subscribes to the argument that an LLM, being a summation of collective knowledge, offers something like the wisdom of the crowd, and that this knowledge is necessarily 'better' (whatever that means) than individual knowledge. Hence, treating an LLM as a 'thought leader', even if one does not believe that the machine is thinking or leading, could make sense (there are perhaps some links to the idea of paternalist AI). Perhaps the silliest thing that a company like OpenAI has done is call its product 'ChatGPT' rather than something which sounds personable and human. They have certainly missed out on an opportunity to make their LLM the next top internet influencer. Perhaps they are scared of founding a cult, but, you know, disciples are profitable. Hence, let us call this the 'cult' scenario.
Regardless of the scenario we consider (and these scenarios are not mutually exclusive--all three are probably playing out in one guise or another), my broad argument here is that we have always been vulnerable to the influence of those who project social power through their claims to qualification, skill, and taste. While AI might change which signals and markers of quality we subscribe to, the fact that we as humans subscribe to such signals and markers will not change. AI, through the proliferation of information and content, is in fact likely to exacerbate our reliance on these arbitrary signals, and this may be seen as a point of vulnerability for each of us. But that is far too normative a conclusion. A better conclusion is this: those who are able to clad themselves in the markers of the 'thought leader' or influencer, who are able to have themselves labelled in the minds of others as 'smart' or 'fashionable' and so on, may regard the era of AI as a playground. While AI prompts us to consider a world where, as the phrase goes, 'humans need not apply', we must also appreciate where, and in what way, AI creates a 'human premium' for those positioned to exploit it.