

Some Problems Aren't Behavioural

Introduction
I am critical of efforts to solve behavioural problems with technical solutions. Indeed, many instances of technological adoption in organisations today respond to organisational (i.e., human) challenges, which for one reason or another cannot be solved through human means (typically, because the person causing the problem is also the person with the power to choose the solution, and we don't like pointing the finger at ourselves).

This post isn't about poor technical solutions. Rather, it's about poor behavioural diagnoses of, and solutions to, technical problems. Because, just as behavioural problems are ill-served by technical fixes, so too are technical problems ill-served by behavioural strategies. By 'technical problems' and 'technical solutions,' I may mean things like changing website functionality. But more often, I mean things like legal reform, regulatory change, and economic reorganisation. In short, stuff that changes our relationship with technology, rather than just our experience of it (which, in my opinion, is what behavioural interventions typically do).

There is a great deal of enthusiasm amongst behavioural scientists to use the field to solve important problems in the world. But their toolbox is, understandably, limited to behavioural tools. Of course, this problem is faced by all professions. Financiers will probably diagnose incentive problems. Leadership experts will point blame at whoever is in charge. Social critics will emphasise malign institutions and cultural trends. We all approach scenarios with our biased perspectives, which I mean in the Simon (2000) sense of our default view of the facts and values at play. So, this is not an attack on behavioural science per se. It just happens to be a field I am more familiar with, and thus more exposed to.

Online Influencers and Dark Patterns
Behavioural science has applications in online retail, for good and for ill. One example which I generally take a neutral stance on is influencer marketing. I have supervised various student projects examining the topic. Invariably, these discussions focus on why this form of marketing works. There are discussions of warm-glow effects (e.g., Andreoni, 1990), parasociality (e.g., Labrecque, 2014), and so on. These areas of the topic are typically the ones which actually interest the student.

But, do these behavioural aspects matter? This is a question which I have found myself, first gently and now more explicitly, bringing up over the years. Much of the 'behavioural science' of influencer marketing isn't especially new. Many of the papers which find themselves cited in these discussions are several decades old, while the discussions themselves are typically (good) summaries of a huge amount of literature on, essentially, why people like one another. Wikipedia informs me that the term 'parasocial' was coined in 1956. That same knowledge aggregator begins the discussion of warm-glow effects with Socrates, with economics grounded in Ricardo. Personally, I recall working on a project about metacognition several years ago, and all the major literature came from the 1990s. It essentially said 'trust matters, and trust is influenced by expertise' (Friestad and Wright, 1994, 1999; Moon, 2010).

This is all to say, before Facebook or Instagram or TikTok, most marketing professors could come up with a reasonable theory for why 'influencer marketing' would work, and it would essentially be identical to the theories which are used today.

Of course, 'influencer marketing' wasn't really around in the 1990s. If it is a phenomenon today, something must explain what is happening. And that something, in my mind, is almost entirely technical. One of the reasons platforms--which economists used to call two-sided markets--work is because they lower transaction costs for disparate groups (Rochet and Tirole, 2004). If I want to buy a product, and you want to sell a product, we could potentially help one another out, but we'd need to find one another first. Platforms like Facebook Marketplace, Amazon, or eBay all offer a coordinating advantage in this regard. This is very relevant for influencer marketing. Without such platforms, the coordination costs of getting an influencer and a potential consumer in the same place would be prohibitive. Perhaps the most famous analogue would be something like the 'Avon Ladies,' who would patrol their territory, arranging Avon parties at which they would hawk their wares to a group of potential customers. Avon parties also demonstrate another technological advantage for platform-based influencer marketing: scale. It is more expensive to pay someone to deliver a personable sales pitch than it is to put up a billboard. Thus, to make influencer marketing economically viable, the number of attendees at the pitch has to be huge. Imagine an Avon party, hosted by one person, for tens of thousands of attendees. Physically, such a proposition is unreasonable. But via a digital platform, it becomes technically viable, and thus economically viable.
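To put some illustrative numbers on the scale point, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (the fee for a personable pitch, the billboard cost, the audience sizes) is a hypothetical assumption of mine, chosen only to show how a flat fee spread across a platform-sized audience collapses the per-viewer cost:

    # A back-of-the-envelope comparison. All figures are hypothetical.
    PITCH_FEE = 500.0          # flat fee for one personable sales pitch (assumed)
    BILLBOARD_COST = 2_000.0   # cost of a billboard (assumed)
    BILLBOARD_VIEWS = 100_000  # impressions that billboard buys (assumed)

    def cost_per_viewer(fixed_cost: float, audience: int) -> float:
        """A flat fee divided across everyone who sees the pitch."""
        return fixed_cost / audience

    billboard_rate = cost_per_viewer(BILLBOARD_COST, BILLBOARD_VIEWS)  # 0.02 per viewer

    # From an Avon party in a living room to a platform livestream.
    for audience in (30, 3_000, 3_000_000):
        rate = cost_per_viewer(PITCH_FEE, audience)
        verdict = "viable" if rate <= billboard_rate else "not viable"
        print(f"audience {audience:>9,}: {rate:.6f} per viewer ({verdict})")

Nothing psychological changes between the rows of that comparison; only the denominator does, and the denominator is precisely what platforms supply.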

In my opinion, these two reasons explain everything. Before platforms, coordination costs were too high and the scale was too small for influencer marketing to be viable. Platforms lowered costs, increased scale, and thus turned influencer marketing into a more viable sales strategy. No new psychology has really driven this trend. Insofar as influencer marketing is a 'problem' (which I'm not saying it is), to tackle it behaviourally seems strange. People have always been influencing one another, buying based on recommendations, demonstrations, and commentaries (Moon, 2010). If one wanted to end influencer marketing tomorrow, one should not 'nudge' users of a platform. Rather, one should change the technical aspects of the platform, say, by setting a low maximum number of followers, or capping the number of viewers in a livestream.

As above, I don't really think influencer marketing is a problem, though I do have concerns about aspirations to become an influencer. It is a job which only exists when relatively few people do it (Lanier, 2009). It is a cannibalising profession. Yet, for something more on the dark side, we should look at dark patterns. In recent years, my work has focused more on thinking about these deceptive user interface designs. Indeed, my work with colleagues has played a small role in highlighting the alignment between the dark patterns literature and behavioural science. But this is a double-edged sword. As with influencer marketing, I increasingly find myself roped into discussions about how behavioural science can tackle dark patterns. And maybe behavioural science can--I know too little to comment authoritatively on some proposals for 'fair patterns' and 'light patterns' (Luguri and Strahilevitz, 2021), but I would not foreclose the possibility that these ideas have potential.

Yet, behavioural factors are not treated as the primary driver by those whose job it is to care about dark patterns, namely, consumer regulators. I think for good reason. The UK's Competition and Markets Authority, for instance, has recognised that the kind of 'tricks' which are characteristic of a dark pattern have existed pretty much for as long as markets have been around. Obscuring prices and upselling are not recent inventions. So, why do dark patterns matter now? Well, for the same reason that influencer marketing now matters--technologies enable scales which were previously unrealisable. One dodgy merchant with a market stall is regrettable; when that merchant can sell to millions of people, we have a risk of significant consumer harm.

To this end, I must ask--are the solutions to dark patterns likely to be behavioural? In the round, I suspect not. I suspect that if the primary lever for tackling dark patterns were various nudges and other behavioural techniques, the net result would be a horrific mess of frameworks and taxonomies, legislative cases, and a proliferation of definitions. I am, of course, biased here: my own advocacy for regulatory principles as a solution is publicly available. Partly, though, I'm not especially interested in discussing 'how to solve dark patterns' here. In fact, the perspective I am offering probably suggests we can't eliminate dark patterns. What I want to emphasise is that despite behavioural science having a function in the dark patterns discussion, the problem is a technical one. And thus a technical solution, such as regulatory intervention, is likely to be more appropriate. At the least, when behavioural scientists discuss dark patterns, they should be mindful of giving their discipline the appropriate role--as a mechanism, not as a driver.

Misinformation and Political Polarisation
From the outset I should admit that I do not care for much of the behavioural science work around misinformation and political polarisation. As a brief critique, far too much of it strikes me as a bit condescending. Misinformation scholars, whether or not they state it explicitly in their work, are (in my experience) privately motivated by a belief which essentially boils down to 'people are easily manipulated, and that's why they don't agree with me.' I have no doubt that this does not describe every scholar of the behavioural science of misinformation, and as above, this perspective seems to be more of a private view than a scholarly one, but nevertheless, it rubs me the wrong way. Though--again, from personal experience--those who write about the drivers of political polarisation from a behavioural perspective seem more overt in expressing this sentiment. Many political scientists examine why society is polarised around various issues. I have no doubt that some aspect pertains to bias, or whatever. But (and again, maybe I hang out with the wrong people), I find enthusiasm for integrating history, economics, and sociology into explanations of political polarisation lacking amongst some behavioural researchers.

Yet, I think this personal gripe extends to quite a legitimate critique. I was speaking to a friend several months ago who has been doing various work on political polarisation. They suggested that the apparent increase in liberal values in academia (I am taking their word for this) suggests that academics are becoming more closed-minded. By excluding right-wing perspectives, we as an academy are missing out on something. And while I don't disagree in principle, their view that this political shift reflects a behavioural anomaly (e.g., closed-mindedness) struck me as quite odd. Wages are falling; prices--particularly of food and housing--are rising; job security in UK HE is poor; and the costs of reaching a senior level in UK HE are greater than ever, owing to higher fees and more competition. With my Marxist hat on, I can't help but wonder whether pathologising swathes of doctors and professors as 'closed-minded,' rather than chalking shifting political views up to tangible, material factors, is a bit of a stretch (for the record, just because someone has a PhD, it does not magically mean they're actually open-minded).

Of course, for the behavioural scientist, linking political polarisation to something describable as a behavioural bias is great. Maybe, if my disagreements with billionaires are only the result of poor mental shortcuts rather than, I don't know, substantially different class interests and economic power, we can all be singing kumbaya in no time. Now, to be fair, this is an exaggeration of the position held by my friend (who is a respected behavioural scientist, and whose writing is often both extremely accessible and internally coherent). Their concern is that we extrapolate far too much from too little information--for instance, concluding that right-wingers are stupid--which I will concede has some worthwhile behavioural underpinnings if we go back again to, say, Simon (1981) (I have heard such extrapolation called the Simon paradox, but I'm not sure this is a commonly used term). I have no objection to this. But, as above, humans have always been like this. To highlight this as a driving feature of polarisation now is to try to explain variance with a constant. We have to go deeper, and that includes looking at the structural critique.

Now, continuing my desire for fairness, my colleague did also suggest technology played a role. They have observed, for instance, that social interactions increasingly happen online, and that technology has a scaling component which perhaps causes us to lean more on some of these behavioural extrapolation mechanisms. This is a nice perspective, one that ties the technical and the behavioural together. Though, I would suggest--and have suggested--that they take this much further. Namely, that if we ignore all the economic and sociological factors involved in polarisation, this technological explanation is much more relevant than a behavioural one. As technology critic Jaron Lanier (2009) has emphasised, Web 2.0 has morphed human identity into a series of categories. Even our interactions with one another (e.g., like, share) are typically categoric. Humans can like things to various degrees, but our online personas can only like things, or not (also see Scott, 2015). This is not an environment which encourages one to think about another person as a whole, rounded individual, rather than a series of categories.

Technology is the variable; behaviour is the constant. If anything, the behavioural tricks which cause us to extrapolate from very little information are a survival mechanism for a deeply inhuman online environment. The solution to polarisation, if again it is not a product of structural economic factors, must be a technical solution, such as redesigning online platforms, online interactions, or even the pervasiveness of the online in the offline world. A society that rejected certain uses of the online space--news, political commentary, professional working, commercial advertising--would regard the internet quite differently, and would define challenges like polarisation and misinformation very differently. If we have a grasp of how humans behave, the policy question should be: how should we allow technology to be designed and used? It should not be: how can we alter or manage human behaviour? (see, for instance, Frischmann and Selinger, 2018)

The story with misinformation is, I think, quite similar. Misinformation has always existed, and insofar as there are novel factors potentially driving misinformation, these are generally technical factors. For instance, in a recent talk I gave at a Turing Institute Fringe event, the topic of personalised misinformation came up, something which regulators like the abovementioned Competition and Markets Authority are probably concerned with too. In this instance, technology exacerbates already existing behavioural phenomena, and points to a technical solution, not necessarily a behavioural one.

An interesting paper on this topic comes from Adams et al. (2023). In their review of various misinformation studies, they reaffirm that, really, information technology is at the heart of the matter. But also, crucially, they note that there isn't really a strong evidential base around the behavioural science of it all. We don't really know how 'believing' misinformation (insofar as one retweets or shares a piece of fake news) affects what that person and others actually do in their everyday lives. This reminds me of the UK Information Commissioner's Office, which found no evidence linking the psychographics firm Cambridge Analytica to voting preferences in the Brexit referendum. A recent piece in The New Yorker on this matter is also worth reading.

Adams et al. (2023) raise an additional, interesting point which I think is worth considering. Namely, that even if technology has a role in this whole story, emphasising technology as a magnifying glass for behavioural biases ignores the role of history, sociology, philosophy of science, epistemology, and, as above, economics. Indeed, it is worth asking ourselves: 'who decides what is true?' Only those who have already convinced themselves that they are privy to 'reality' can position themselves as the saviours of those otherwise damned to suffer fake news in their social media feeds (Feyerabend, 1978).

For someone like Feyerabend (1978), if not for someone like Illich (1971), the solution to a 'problem' like misinformation or political polarisation is not for one group to diagnose and treat the other through a prescription of debiasing interventions promoted via TED Talks and book tours. Rather, it is to actually accept that people can disagree, and to be willing to consider that those disagreements arise for legitimate reasons. This is a much bigger and more humbling ask than just nudging someone to reconsider before they share an article about Joe Biden being a lizard, or whatever.

Some Conclusions
What I am trying to articulate with these various examples is that some--perhaps not all, but certainly some--of the topics that dominate behavioural science today probably aren't all that behavioural. There is a behavioural component at play, though as behavioural scientists love to remind people, behaviour is always important. What matters more is what factors cause the behaviour to be expressed, or exacerbated.

Society has never been perfect, and I do not want to imply an idyllic time prior to the emergence of some technologies. But it seems reasonable to me that if there was a time when humans could be biased, influenced by others, prone to extrapolating from little information, or whatever; if there was a time when these behaviours did not result in the end of the world, then these behavioural 'problems' today are not really behavioural. They are technical, in the sense that they require some adjustment in how technologies, institutions, and laws interact to shape human experiences. As above, it is natural for a well-meaning behavioural scientist to prescribe a behavioural solution to whatever problem they care about, just as it is reasonable for an anthropologist to derive a solution from anthropology. But what matters--what always matters--is getting the right 'problem representation,' to use the language of Simon (1981). One needs to understand the problem before a solution can be determined.
