Google Should Have A Random Button
In the past, I have suggested to students that recommendation algorithms should be viewed positively. The argument is as follows. There is too much information online today. It would be too costly to search this information without some guide. Recommendation algorithms reduce these search costs. They may not find the best information for you, but they find perfectly adequate stuff given the enormous time savings. This is an argument I agree with. I think many people kind of think this way about algorithms, too, though I suspect it is less common for the argument to be spelt out explicitly, and that most understanding is more implicit.
Something I want to reflect on, though, is whether these costs are exaggerated. Or, at the least, whether the term 'cost' (or, alternatively, burden) is a bit of a misnomer. I think it is beyond dispute that there is a lot of information available to people today. Probably, in some instances, too much. But, let us consider what constitutes 'too much.' I do not think 'too much' should be defined simply around a person's cognitive capacities--by that standard, ten choices is too much, if one believes in the magic number seven (plus or minus two). People might have limited cognitive capacities, but this only really matters in areas of 'investment,' by which I mean areas of life where a decision today will influence one's options in the future. Choosing what to watch on TV, or what to listen to on the ride home from work--these I would not define as 'investments' per se. Though, I do not dispute that, insofar as a piece of art can have profound and formative effects on a person, these decisions can, in some instances, be extremely impactful upon the future self. My main point is that, in most instances where recommendation algorithms are used, the fact that we have limited cognition is not enormously relevant.
Rather, 'too much' in these instances is often more of a cost/benefit kind of thing. A crappy video might offer a slight bit of welfare--perhaps simply by passing the time in between undesirable searches. Searching for the crappy video costs a bit of welfare--often the time taken to search (though, interestingly, if one is simply trying to 'kill time,' so to speak, one might re-evaluate whether the act of searching is actually a cost, or whether the cost is the whole activity of watching TV, which might instead reflect some deeper malaise in that person's life). From this somewhat mechanical perspective, one can understand 'too much' as arising when the average welfare benefit of a video is less than the average welfare cost from searching. This can hold even in the special case of just watching the first video that is available, assuming that some videos have negative welfare effects (e.g., because they are boring). The same applies for other pieces of media.
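For the mechanically minded, this condition is easy to sketch. Below is a minimal toy example in Python; the welfare values and the search cost are invented numbers purely for illustration, not measurements of anything.

```python
import random

# Invented welfare values for a handful of videos: some positive, some
# negative (a boring video can leave you worse off than not watching).
video_welfare = [2.0, -0.5, 1.0, -1.0, 0.5, 3.0, -0.2, 0.1]

# Assumed welfare cost of each round of searching.
search_cost = 0.9

avg_benefit = sum(video_welfare) / len(video_welfare)

# 'Too much' information, on this account: the average benefit of an
# item no longer covers the average cost of finding it.
if avg_benefit < search_cost:
    print(f"too much: average benefit {avg_benefit:.2f} < search cost {search_cost:.2f}")
else:
    print(f"search is worthwhile: {avg_benefit:.2f} >= {search_cost:.2f}")

# Special case from the text: watch the first video available, at zero
# search cost. Expected welfare can still be negative, because some
# videos have negative welfare effects.
first_available = random.choice(video_welfare)  # 'first' is arbitrary here
print(f"welfare of watching whatever comes first: {first_available:+.2f}")
```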
Finally, as an aside, by 'too much' information, I do not mean this moralistically, in the same way that someone might complain of 'too much' violence in TV shows, or 'too much' sex in music, and so on. That rendition of 'too much' implies that the reduction to 'acceptable' levels should be determined not by physiology (e.g., cognitive limits) or individuals (e.g., subjective welfare), but by committee. And, in most cases, such moralistic committees return us to a dark age of thought and freedom in whatever domain they target.
With all this in mind, when I say that one should reconsider the costs of 'too much' information, what I strictly mean to ask is whether the act of searching through information is wholly a cost. As above, the use of algorithms can--and I think should--be justified on these terms. More and more, the deployment of AI is justified on these terms, too (though I think the arguments here are a lot weaker). In both instances, technology is introduced to save people the burden of (fully) exploring the information landscape for themselves, assured (so the argument goes) that such exploration is not really worthwhile. Increasingly, I think we must reject this argument.
Let me start with a loose definition. I could use the term 'algorithm aversion' henceforth, but I think this misses something. Algorithm aversion is a general dislike of algorithms. At the most extreme, I suppose one might see it as a fear or suspicion of algorithms--algophobia, perhaps? Instead, what I want to talk about is what might more benignly be called algorithm rejection. My loose definition: the rejection of algorithmic curation in areas where one determines exploration to be worthwhile. It is not an aversion to algorithms per se--as above, there are good reasons to use recommendation algorithms--but rather, a recognition that sometimes the 'costs' of searching are not costs, and thus the 'benefits' of algorithmic curation are hardly benefits.
Some examples might help. I have not cut algorithms out of my life. One area where I lean on algorithmic curation a lot is YouTube. Here, I do not especially value searching for new videos or video creators. Instead, I regard most videos as disposable, and thus see little value in manually searching for them. The recommendation algorithm is thus, for me, worthwhile. I do not reject it. But I do not view music in the same way. For me, music is something to be cherished. While I fear 'collecting' music devalues it (as music becomes more of a commodity than an experience), I nevertheless do not want to lose something once I have found it. Algorithmic curation, with its constant search for novelty, alas, often results in this. I do not have Spotify, though my fiancée does. And I think it quite remarkable that she will listen to the same song on repeat for weeks, to the point that I hate the song (increasingly something designed to be used in TikToks). Then, I will never hear the song again. The reason is not that she suddenly stopped liking the song. Rather, it is that her listening habits are highly curated by an algorithm which, seemingly imperceptibly, cycles songs in and out, leaving them as cognitively disposable as the YouTube videos I watch. (That I treat YouTube videos as disposable, and music as not, is no slight against the former; others may have the opposite perspective on these media.)
So, that is the first reason for algorithm rejection: algorithmic curation often renders disposable that which is curated. This relationship should be evident. Algorithmic curation is a kind of cognitive offloading--the use of technologies and physical activities to reduce cognitive burden. That which I do not wish to invest my mind in exploring is unlikely to be that which I consider worth retaining in my mind (hence, disposable). In instances where one does not wish that which is being curated to become disposable, curation thus incurs a distinct cost which eats into the benefit of reduced search costs. But, as immediately above, one might also consider a second reason for algorithm rejection: that exploring itself conveys benefits to a person.
This is quite evident in learning. Learning is hardly about remembering details, but about exploring ideas and drawing links between them. It is the links which convey knowledge and authority; the nodes of information are hardly relevant. But, inevitably, well-established links lead to easily-recollected nodes. Thus, effective learners often employ techniques like mind mapping or annotated literature reviews. On paper, the purpose of such exercises is to create effective notes which organise information (the nodes). But, in reality, it is the act of making these resources which contributes to the learning, by encouraging one to draw connections between materials (the links). This is clearly demonstrated in something as simple as a mind map. That one often comes to remember details through these exercises is just evidence of the above statement that well-established links lead to easily-recollected nodes--essentially, associative memory.
That 'it is the journey, not the destination' is important when examining some emerging studies of artificial intelligence. I do not claim to be an expert in AI edtech, and I am sure for every disappointing study I could discuss, someone could present a positive story. But I am also an educator, and so I synthesise the information I do have through a lens which accords with my personal experiences. Several studies appear to suggest that AI technologies, while effectively offloading cognitive burden, undermine critical thinking. This is hardly desirable in any learning activity. Cognitive offloading can be valuable within a context where a) one lacks cognitive resources; or b) one's cognitive resources are best deployed in other ways. As much of the above articulates, these conditions apply differently for different people, but they typically hold when someone considers something to be disposable. Now more than ever it is vital that people be equipped with critical thinking skills--to prioritise the outcomes of education over the process of learning is to deny a generation their right to be independent minds. That AI technologies might be entering into education (and, probably, are) reflects wider challenges--for instance, teaching pressures due to austerity in education. But one must not overlook the canary in the coal mine: if students are using AI technologies, disposing of their critical thinking skills in the process, they clearly do not value the act of searching and learning. That is a big problem--one that is not solved either through AI, or through proclamations of 'AI bad.' But I digress...
Learning is a big example of the principle of 'it is the journey, not the destination' which encourages algorithm rejection. But this can apply in any area where a person believes a domain to be valuable, and not disposable. As above, for me, this is music. That I go out of my way to engage with music in non-algorithmic ways reflects my desire to be exposed to all manner of things. I listen to a lot of shit music. It is uncommon for me to find something I consider good. Therefore, someone who does not like music, or is rather apathetic, might--rightly--prefer the Spotify algorithm. But for me, the need to explore is an opportunity to listen to music I would never otherwise have listened to. Or, to discover genres and musical techniques I had not really thought about before. For instance, I would not say I am a jazz fan, and I know essentially nothing about jazz. But, through algorithm rejection, I now listen to a lot more jazz than I ever did previously, so much so that I can confidently say: I like jazz (insert Bee Movie meme here).
When 'search costs' actually deliver benefits, the promise of an algorithm is reversed, and the case for rejection rises. This is not necessarily a new idea. I will attribute the following thoughts to Sen (2002), as I am currently reading some of his work, but I know these thoughts are part of a wider tradition. To choose an option from a set of options is not the be-all and end-all. Yes, one wants to choose something because that thing provides one with some benefit (call it welfare or utility). But, as Sen notes, there are times when benefits arise through the whole process of choosing. This is not to say that the hedonometer starts running the moment one makes a decision, rather than the moment one simply consumes their choice. Rather, it begins (in a manner of speaking) the moment one faces any choice at all. The process of reasoning through options, of learning about choices, perhaps of discussing and negotiating with others; all these things contribute a benefit to the person in excess of the welfare which comes either from receiving a chosen outcome or consuming it. For instance, imagine a board game. It feels good to win (receiving an outcome). It feels good to be a winner (consuming an outcome). But most people play games because they are fun to play--that one wins, and that one enjoys a reputation as a winner, are only additions to the joy which one derives from the process itself, namely, the whole game.
Algorithms automate many of these processes. Yet, as above, this is only really a cost when we care about the domain in which algorithms are deployed. In recent years, I think people have come to realise that they care a lot more about aspects of the internet than they had perhaps appreciated. The imposition of algorithms, and more often the changing of algorithms to the point of diminished benefits, has catalysed (for some) a more critical assessment not of algorithmic curation per se but of how one feels towards an increasingly curated online space, and cultural life. Frischmann and Selinger (2018) talk about how technology is creating a 'frictionless world' in which we are all increasingly pliable. They did not develop this idea exactly in relation to this point about algorithm rejection, but I think it probably applies--people are beginning to realise that things they actually care about (which is to say, things they do not think should be disposable and for which the journey matters) are subject to algorithmic curation, to the net detriment of those things. That algorithms tackle the 'too much' information problem and reduce 'friction' is not, actually, always a good thing.
I could talk about BlueSky. It does not currently have an algorithmically curated feed, and instead a) only shows posts from those one follows, and b) shows posts chronologically. This is interesting. It may reflect a belief that what people say online matters, and that exploration of online discussion is a worthwhile exercise in itself. To this end, the absence of a curation algorithm aligns with, if not being demonstrative of, algorithm rejection (I have no way of saying how many people find BlueSky appealing because of algorithm rejection, though I do). Though, what is perhaps more interesting to this discussion is the latter part: posts are shown chronologically.
I remember when Facebook switched from a chronological feed to a recommendation feed. I remember when Instagram did it, too. I remember people not being especially happy about it. I suspect the arguments at the time were kind of like those above, though likely with more polish. Something like (read in Mark Zuckerberg's voice): recommendation algorithms will help you find more content you love, improving user experience on our platform. Today, algorithmic recommendation, rather than chronology, is king on Facebook and many other platforms. And this reveals a point. You cannot not have some kind of curation. Things have to go somewhere. Even if arranged randomly, that is still a form of curation. This draws on a niche argument in behavioural science around the inevitability of choice architecture--you cannot not present options, so you might as well 'architect' how those options are presented, rather than leave their arrangement to chance. And, yes, I agree, sometimes. But algorithm rejection challenges us to think about the limits of inevitability. David Bowie, for instance, used to scatter song lyrics randomly as a means of inspiration. In a sense, he 'curated' the lyrics, but this is quite a different perspective on 'curation' than what most algorithm designers (or choice architects) have in mind. Chronology is another form of 'curation,' but again, not in the same sense as algorithmic recommendation. As a default way of arranging posts, putting the most recent first kind of makes sense.
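To make the 'you cannot not curate' point concrete, here is a toy sketch in Python (the Post type and the engagement scores are invented for illustration, not any platform's actual machinery). Each ordering still has to put something first; they differ only in what, if anything, the arrangement optimises for.

```python
import random
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # e.g., seconds since some epoch
    predicted_engagement: float  # stand-in for a recommender's score

def recommended(posts: list[Post]) -> list[Post]:
    # Algorithmic recommendation: engineer the ordering around a score.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def chronological(posts: list[Post]) -> list[Post]:
    # Chronology: a crude but legible default -- most recent first.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def shuffled(posts: list[Post]) -> list[Post]:
    # Randomness: still an arrangement, just one that optimises nothing.
    return random.sample(posts, k=len(posts))

feed = [Post("ana", 100, 0.9), Post("ben", 300, 0.1), Post("cal", 200, 0.5)]
for curate in (recommended, chronological, shuffled):
    print(curate.__name__, [p.author for p in curate(feed)])
```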
Algorithmic recommendation involves the engineering of choices in the same way that choice architecture in behavioural science does. And this is sometimes a good thing. But algorithm rejection, and the principles I have tried to outline here (that people care about the disposability of their choices, and that people care about the act of choosing itself), suggest one should be more open-minded to the crudeness of, say, chronological ordering or even randomness. It is telling, for instance, that Google has the 'I'm feeling lucky' button, which takes someone to the top link of a Google search, but no 'random' button. Sure, it would probably be really unhelpful. It would also be quite fun. I hope to see more algorithm rejection for this reason.