Interview with Prof Peter Ayton
Homer Simpson made an appearance in the nudge book (Thaler and Sunstein 2008), but he was a favourite allusion of behavioural scientists long before Thaler and Sunstein published their widely read book; Homer Simpson slides were being put up at conferences I went to twenty years ago, and I remember groaning at the time, “I can’t go to a conference talk now where there isn’t a slide of Homer Simpson being presented by a decision theorist.”
Of course, there is a case to be made that people, like Homer, don’t act ideally under all circumstances, under all conditions and under all the constraints that apply. But, in a sense, how could they? The economic standard of rational decision making is completely impossible for anyone to live up to.
There is a lovely quote in Peter Bernstein’s book Against the Gods (Bernstein 1998), in which he interviews Daniel Kahneman. Kahneman asks, “Who could possibly design a brain that performed as this model mandates?” The economic model mandates that all of us absorb all relevant information immediately and instantly grasp what is going on. But that standard is really a macro-economic idealization; when you start analyzing individuals through a micro-economic lens, it becomes apparent that they cannot consistently live up to it.
We constantly judge people’s actions by an impossible standard of performance; so, of course, people aren’t going to be like that. For psychologists, it is completely uncontroversial to assume that people have limitations and won’t be able to do some things. Take memory, for example; people don’t remember everything, but there isn’t a normative theory of memory which says that you should remember everything perfectly.
There is a normative theory of decision making and reasoning, however, which makes this area of psychology quite different. The tradition in decision-making research is to take some ideal form of logic, or some rational model from economics, and compare it to human behaviour. The data we report in our papers are the discrepancies from this impossibly idealistic model, which we subsequently try to understand. And, of course, by that standard we naturally conclude that people aren’t good decision-makers, when in fact they are only underperforming according to that standard. Clearly, we do get by. The technological achievements of human beings are extraordinary: we’ve put a man on the moon. We can do all sorts of remarkable things; how could we do that if we were completely hopeless cases?
Herbert Simon pointed out a long time ago that we satisfice, we don’t optimize, in our decision-making (Simon 1947); we look for the best solution that we can find in the time available to process the information available. Business schools have seized on the work of people such as Kahneman and Tversky concerning our bounded rationality and have suggested that people are irrational. But Kahneman is actually very clear on this, pointing out that prospect theory and the heuristics-and-biases programme are, of course, not rational models. The claim is not that people are crazy; the claim is that people use heuristics precisely because they have a limited capacity to process information.
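Simon’s distinction can be made concrete with a toy sketch. The values, aspiration level, and function names below are all hypothetical illustrations, not anything from Simon’s own work: a satisficer stops at the first option that is good enough, while an optimizer must scan everything before choosing.

```python
# Toy contrast between satisficing and optimizing (all values hypothetical).
options = [4, 7, 9, 6, 10, 3]  # utilities of options, encountered one at a time
ASPIRATION = 8                 # the "good enough" threshold for the satisficer

def satisfice(stream, aspiration):
    """Take the first option that clears the aspiration level."""
    for option in stream:
        if option >= aspiration:
            return option
    return max(stream)  # nothing cleared the bar: fall back to the best seen

def optimize(stream):
    """Scan every option and take the maximum."""
    return max(stream)

print(satisfice(options, ASPIRATION))  # 9: good enough, found after three looks
print(optimize(options))               # 10: the best, but requires the full scan
```

The satisficer settles for 9 because it appears early and clears the bar; finding the 10 would have required examining every option, which is exactly the cost a bounded agent is trying to avoid.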
Work by people such as Gerd Gigerenzer shows that people can actually do extraordinarily well when applying simple heuristics (Gigerenzer and Gaissmaier 2011). These heuristics are not necessarily dumb; they are simply imperfect, and imperfection is dead easy to show. A lot of psychology experiments that show dubious behaviour have been deliberately designed to do just that. It is easy to show that people make mistakes (there are literally thousands of visual illusions, for example) but being fooled by an illusion doesn’t automatically make you blind.
When you start to understand why these illusions come about, very often you discover that the mistake results from an inference which is invalid in this particular case but perfectly sound in a broad range of other contexts.
These inferences can actually be clever because we are using information to go beyond what we can see. Sometimes this is literal, as in physical judgments such as how far away things are, information that isn’t given directly in the data arriving at our visual systems. We can do all of that because, when we make inferences, we go beyond the information given.
The classic example illustrating this point is the violation of transitivity. Transitivity says that if A is taller than B, and B is taller than C, then A is taller than C, because physical dimensions like height, weight, brightness and all the rest respect transitivity. A presumption of the rational decision model is that our preferences should respect transitivity too. So, if you prefer chocolate to strawberry, and strawberry to vanilla, the model declares that you must prefer chocolate to vanilla.
Tversky did a very clever experiment at the end of the 1960s (Tversky 1969) in which he constructed items rated on three dimensions: how intelligent they were, how sociable they were, and how emotionally stable they were. The task asked you to take pairs of these items and say which of the pair you preferred (the pairs were drawn, apparently at random, from about a dozen alternatives). Tversky was able to show that you could easily create exemplars where people preferred A to B, B to C, C to D and D to E, but then contradicted the principle of transitivity by preferring E to A. People frequently made decisions according to this pattern because the differences in intelligence within each pair were relatively small, inviting the possibility of discounting, where alternatives are effectively treated as the same.
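The mechanism behind this cycle can be sketched in a few lines. The ratings, threshold, and rule below are my own illustrative construction, not Tversky’s actual stimuli: if a decision rule ignores small differences in intelligence and falls back on a second dimension, adjacent items are decided by that second dimension, while the extreme pair is decided by intelligence, and the preference chain closes into a cycle.

```python
# Hypothetical candidates rated on (intelligence, sociability): intelligence
# rises in small steps from A to E while sociability falls.
candidates = {
    "A": (1, 5), "B": (2, 4), "C": (3, 3), "D": (4, 2), "E": (5, 1),
}

THRESHOLD = 1.5  # intelligence gaps below this count as "roughly the same"

def prefer(x, y):
    """Use intelligence only when the gap is noticeable; otherwise
    fall back to sociability (a lexicographic semiorder rule)."""
    (ix, sx), (iy, sy) = candidates[x], candidates[y]
    if abs(ix - iy) > THRESHOLD:
        return x if ix > iy else y
    return x if sx > sy else y

# Adjacent pairs differ only slightly in intelligence, so sociability decides:
for pair in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]:
    print(prefer(*pair))  # A, B, C, D: each earlier letter wins

# But the extreme pair differs a lot in intelligence, so intelligence decides:
print(prefer("E", "A"))   # E beats A, closing the intransitive cycle
```

The rule is reasonable at every single comparison, which is the point: the intransitivity emerges only when the comparisons are chained together.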
Bees, interestingly, also appear to violate transitivity from time to time. A study of bees (Shafir 1994) in an artificial environment, in which they demonstrate preferences for particular flowers according to the nectar in the flowers, reveals a contradictory pattern of preferences when stimuli are arranged with the intention of revealing errors. It is important to note that bees have been bees for fifty times as long as people have been people, which is a testament to their survival capabilities. And yet they too consistently make ‘rational errors’ because of bounded rationality.
The general rhetoric surrounding bounded rationality is common in everyday situations. You will hear shoppers saying things like, “Well look, these two are roughly the same price so let’s just think about this,” or, “These two are roughly the same, or this is roughly the same type.” People are effectively trying to throw data away when they employ these strategies because they feel the pressure of information overload and their own boundedness and struggle to deal with it. They are trying to edit the data to say, “look, they are more or less the same, so forget about that.” That approach, of course, leads to imperfect decision making at times.
Rather than using this as evidence of inadequacy, though, it is important to remember that experimenters have deliberately constructed circumstances to maximize the potential for observing mistakes. As a result, someone like Gigerenzer would advocate using a different standard to judge rationality (Gigerenzer and Gaissmaier 2011). Gigerenzer argues that competent performance, despite constraints, is more interesting to study than subtle imperfections. He effectively asks the question: how do people do so well with their imperfect brains, with their limited capacity and their sort of slightly dodgy heuristics? He answers his own question, offering a sympathetic interpretation of heuristics and arguing that heuristics, in the context of bounded rationality, are probably, in the end, the best decision-making strategy people have.
Article based on interview with Professor Peter Ayton