Mind the Nudge

The study of heuristics and biases has taken the marketing and public policy worlds by storm. People are characterised as irrationally guided by mental shortcuts, known as heuristics and biases, which lead to suboptimal decisions. The discipline of behavioural economics, which explores and documents these 'errors', has spawned a huge literature and resulted in no fewer than two Nobel Prizes to date.

But just how should we respond to this way of understanding behaviour?   Does it mean that all the ways we comprehended behaviour previously are wrong?  That people have no insight into their own behaviours (and indeed those of others)? That we are all deeply flawed as humans and should instead allow machines to make decisions on our behalf?

Whether we are shaped by factors of which we are unaware, or by personal free will and agency, is a debate that has been running for centuries. The more reasoned answer, of course, is that both are important. But the extent to which we focus on unconscious explanations of behaviour – and specifically, here, on heuristics and biases – is worthy of closer scrutiny.

I argue that whilst heuristics and biases are a useful means of understanding behaviour, we are in danger of over-emphasising their importance. To this end, here are a number of watch-outs to bear in mind if we are to avoid over-reliance on them. More importantly, these watch-outs also start to push back against the notion that humans are irrational and inconsistent.

WATCH OUT #1:
Be aware of replication issues with some famous studies

It is no secret that psychology is in the midst of a replication crisis. In 2015, the journal Science published the results of the first large-scale empirical attempt to estimate the reproducibility of psychological research. One key finding was that of 97 attempts to reproduce a statistically significant result, only 36% succeeded, casting doubt on the validity of many empirical findings in the field.

It is therefore sensible to look a little more closely at the studies from behavioural economics (which covers heuristics and biases), and in particular at the key publication of recent years, Daniel Kahneman's Thinking, Fast and Slow. Psychologist Uli Schimmack has devised a measure called the R-Index, which estimates the trustworthiness of a body of research from the reported sample sizes and effect sizes. He applied this approach to the studies cited in each of 11 chapters of Kahneman's book, assigning grades to each result. Two chapters had R-Index scores of 93 and 99 – the equivalent of an A-plus grade for Schimmack. But five other chapters, most notably one on social priming, had scores below 40, which Schimmack rated an F. Overall, the chapters he examined averaged a C-minus.
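To make the logic concrete, here is a minimal sketch of an R-Index-style calculation (a simplification of Schimmack's published method, not his exact procedure, and the z-scores are made up for illustration): a set of studies that all 'succeed' but are only just significant earns a low score, because the reported success rate outruns the statistical power the studies appear to have.

```python
from statistics import NormalDist, median

# Simplified, illustrative R-Index-style calculation. Assumptions:
# two-sided tests at alpha = .05, each study summarised by a
# z-statistic. A sketch of the idea, not Schimmack's exact code.

Z_CRIT = NormalDist().inv_cdf(1 - 0.05 / 2)  # ~1.96

def observed_power(z):
    """Post-hoc power implied by a study's observed z-statistic."""
    nd = NormalDist()
    return (1 - nd.cdf(Z_CRIT - z)) + nd.cdf(-Z_CRIT - z)

def r_index(z_scores):
    """Median observed power minus 'inflation' (success rate minus
    median observed power). Low values suggest the success rate is
    higher than the studies' power can plausibly support."""
    powers = [observed_power(z) for z in z_scores]
    mop = median(powers)
    success_rate = sum(z > Z_CRIT for z in z_scores) / len(z_scores)
    return mop - (success_rate - mop)

# All significant, but only barely: low R-Index (suspicious).
print(round(r_index([2.0, 2.1, 2.2, 2.3]), 2))  # ~0.15
# All significant and strongly so: high R-Index (trustworthy).
print(round(r_index([3.5, 4.0, 4.2, 5.0]), 2))  # ~0.97
```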

Now, this does not look great, but it would perhaps be too hasty to conclude that the observed effects do not exist. Our behaviour is influenced by many contextual factors, so it may be that the replication experiments differed subtly from the originals. Exploring these differences gives us a better understanding of the ways our environment shapes behaviour.

But it does give us our first lesson. Do not assume that all published effects will necessarily be relevant to the particular problem at hand. Scientists are often incentivised to disseminate their findings broadly, which can motivate them to exaggerate the impact of their research and stretch its applicability and relevance. A healthy degree of criticism and scepticism about a study's wider impact is helpful.

WATCH OUT #2:
Be aware of the differences between academic and applied research

The second watch-out is that the heuristics and biases identified in experiments at times bear little relationship to everyday life. At its simplest, this is a function of the people involved in the experiments. Academics have a ready source of participants through which to test their hypotheses: students. These students typically belong to a social group described by the acronym WEIRD – Western, educated, and from industrialised, rich, and democratic countries. There is nothing inherently wrong with testing such a relatively homogenous population, so long as the results are understood to apply only to populations like the one tested. However, the results are frequently assumed to be universally true and applicable to much wider populations. Many studies suggest this assumption is flawed, showing that different cultural groups perform differently in decision-making contexts, irrespective of factors such as affluence and education.

Myriad decision-making factors differ between cultural groups, including perceptions and preferences. For example, in studying risk preferences, Weber and Hsee (1998) found that Chinese participants were significantly less risk-averse than their American counterparts in a decision-making task; and Mann et al. (2010) found that Western students exhibited more confidence in their decision-making than students from Japan, Taiwan and Hong Kong. Risk perception and decision-making confidence both affect how likely a decision-maker is to rely on heuristics, and shape which particular biases they are likely to exhibit.

Again, this is not to diminish the value of heuristics and biases, but there is a clear lesson.  We cannot assume that heuristics and biases will necessarily be relevant to the population we are exploring.  Market researchers have long understood that there are differences in the way different segments of the population assimilate, interpret and act on information. These factors must be considered when interpreting experimental results.

WATCH OUT #3:
Seemingly irrational behaviours may be logical on closer inspection

The third watch-out is that behavioural economics studies can at times throw up apparently irrational responses – but only if we strip out the context. Take a well-known study of driving ability. Svenson (1981) surveyed 161 students in Sweden and the United States, asking them to compare their driving skills to those of other people. 93% of the U.S. sample and 69% of the Swedish sample placed themselves in the top 50%. This was taken as evidence of a bias – the illusion of superiority – because, of course, not everyone can be in the top half.

However, if we look at the wider context, we can see that driving ability is made up of many different dimensions: how fast you drive, how successfully you avoid accidents, how well you park or corner, and so on. Any question asking how good a driver you think you are therefore implies a judgement about the relative importance of these different aspects of driving. I may tend to be good at the aspects I consider important – I corner well and park efficiently, even though I am accident-prone and drive too fast. So, based on what I consider important, I am an above-average driver. Other people may select different aspects, and will not unreasonably consider themselves above average based on their accident-free record and moderate speed. In strict numerical terms these judgements cannot all be right, but the reasoning behind each of them is coherent and considered. It is therefore important to explore why decisions are made, even when they appear superficially irrational; the label 'irrational' may be less apt than behavioural researchers suggest.
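A small simulation illustrates how the arithmetic can work (a hypothetical sketch with invented dimensions and an invented weighting rule, not part of Svenson's study): if each driver judges overall skill by weighting the dimensions they happen to be strong on, far more than half can sincerely place themselves above the population average.

```python
import random

random.seed(1)
DIMENSIONS = 4   # e.g. speed, parking, cornering, accident avoidance
N = 10_000       # simulated drivers

# Each driver's skill on each dimension, drawn independently.
drivers = [[random.gauss(0, 1) for _ in range(DIMENSIONS)] for _ in range(N)]

# Population-average skill per dimension (close to zero by construction).
avg = [sum(d[i] for d in drivers) / N for i in range(DIMENSIONS)]

def self_weights(skills):
    """Weight each dimension in proportion to the driver's own strength
    on it (shifted to stay positive): 'I judge by what I'm good at'."""
    shifted = [s - min(skills) + 0.1 for s in skills]
    total = sum(shifted)
    return [s / total for s in shifted]

above = 0
for skills in drivers:
    w = self_weights(skills)
    own = sum(wi * si for wi, si in zip(w, skills))
    typical = sum(wi * ai for wi, ai in zip(w, avg))  # average driver, same weights
    if own > typical:
        above += 1

print(f"{100 * above / N:.0f}% rate themselves above average")  # well over 50%
```

No one in this simulation is dishonest; each driver is simply answering a slightly different question.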

Our third lesson, therefore, is that apparent irrationality may not be quite so pronounced if we look at the wider context and the meanings people bring to the way a question is considered and answered.

WATCH OUT #4:
Heuristics and biases may only be part of the explanation

The fourth watch-out is that we can easily slip into binary thinking about heuristics and biases, assuming they are the only explanation for behaviour. Marketing professor Itamar Simonson discusses this, exploring how he came to start using a pillow at night after years of not doing so. He accepts that in many instances our preferences are largely determined by task characteristics, the choice context, and the description of options – these were the factors that persuaded him to try a pillow in the first place. But in his paper he makes the point that attributing all our behaviour to these factors is a significant over-claim. In his view, we also have more stable inherent preferences that are not always determined by context, and he argues that the claim that our preferences are constructed has been made with 'few qualifications and little attention to boundary conditions'.

The fourth lesson is that we need to get better at identifying when our behaviour is determined by environmental cues and when it is determined by internal ones, such as attitudes and intentions. Behavioural research often fails to distinguish between the internal and external factors that influence decision-making, focusing instead on the role of external environments alone. Practitioners need to think carefully about how to untangle these different influences on behaviour and how they interplay, so that heuristics take their place as part of the explanation rather than the whole of it.

WATCH OUT #5:
Consider when a cognitive bias may not be a cognitive bias

The fifth watch-out is that we can think about the same cognitive bias in quite different ways. Felin et al. (2017), in an excellent review of this area, give the example of inattentional blindness: Simons and Chabris (1999) showed how a person in a gorilla suit walking through a film sequence can be missed because participants were 'primed' to count the number of basketball passes. Kahneman (2011) uses this as an example of how we can be 'blind to the obvious'. Conversely, however, if participants were 'primed' to watch for the gorilla and then asked to report the number of basketball passes, they would in all likelihood fail to arrive at the correct answer too.

Surely we can take a different perspective on primes. Rather than thinking of them as a source of bias, we can consider them the equivalent of questions or theories that direct our awareness to particular features of our environment, out of the huge array of things we could choose to look at. In the gorilla experiment, we could be asked to attend to all manner of things, from the hair colour of the actors to the gender and ethnic composition of the players. Any of these is plainly visible, but only if you are looking for it and not something else. The fact that we miss some is arguably not a function of blindness or bias, but of an entirely rational and successful process given what we set out to do: we are simply following instructions and pursuing the task at hand.

The fifth lesson is that by better understanding the meanings and motivations of the people we are observing, we may be able to explain behaviour that appears irrational.

Overall

The broad point being made is that it is easy to assume humans are irrational and biased. But that judgement requires us to decide what the 'right' outcome is, a task that may seem straightforward but is often complex. Experimental psychologists like to think that when they strip a question of its cultural context, the participants in their experiments do the same. But as market researchers know, the wider context of people's lives is not easily left behind, and it will be used to inform the way questions are answered.

Philosopher Karl Popper offers us something useful here. He talked about the directedness of perception, contrasting bucket theories of mind with searchlight theories. Bucket theories are models in which the perception of environmental information is automatic: we are passive recipients and processors of the information around us, bounded in the amount we can assimilate and therefore prone to errors.

In contrast, searchlight-based theories of mind assume perception is driven by the guesses, questions, hypotheses, and theories that the mind brings to its encounters with the world. On this view, perception is not about finding 'truth' in the environment; rather, we bring meaning and intention that direct perception and attention.

The notion that our behaviour is largely determined by heuristics and biases implies a 'bucket' theory of mind. But this ignores the way in which much of our behaviour is intentional, driven by our own and shared narratives and meanings. We need to get more comfortable with accepting that different models of mind are useful on different occasions; the challenge is to understand which is most useful when.

The way we think about humans and their ability to perceive is important. There is an increasingly widespread belief that human decision-making is poor: riddled with biases, irrationality and shortcomings. Unfortunately, this can be used as a reason for disempowering individuals, engineering environments instead to suit outcomes that others consider best. Or it provides a rationale for technology to take the place of humans in decision-making in all walks of life, from offering loans through to teaching children.

I am not suggesting that humans are flawless, or that heuristics and biases are not useful to understand. But perhaps if we look more closely, we may find, as psychologist Gerd Gigerenzer suggests, that in order to make good decisions in an uncertain world, one sometimes has to ignore information. This is all part of the essential quality of being human and, as such, is less about irrationality and more about understanding the human condition.


By Colin Strong