More than a century ago, William James, the founder of psychology, proposed that there were two kinds of reasoning (which he labelled ‘associative’ and ‘true’ reasoning). [i] Even earlier, the economist Adam Smith wrote that an individual’s behaviour was characterised by a struggle between what he called the ‘passions’ and the ‘impartial spectator’. [ii]
More recently, Daniel Kahneman further differentiated the two styles of processing, calling them intuition and reasoning. Intuition (‘System 1’) he characterised as fast and automatic; he considered it to be based on formed habits and, as such, difficult to change or manipulate. Reasoning (‘System 2’) he considered slower and more volatile, as it is subject to conscious judgements and attitudes. [iii]
Dual process theory is the term for these two Systems, and it is a critical part of the philosophical and empirical basis of ‘judgement and decision making’, as it provides the explanatory basis for how we use ‘mental shortcuts’ to make all sorts of decisions. The lack of rational engagement in ‘System 1’ means we are able to operate in a semi-conscious (or non-conscious) way; otherwise, we would struggle to process all the different information inputs we receive.
The popularisation of Kahneman’s work has led to a lot of discussion in the market research industry about the role of surveys. Surely, the argument goes, ‘rational’ questioning involves ‘System 2’, yet we make all manner of low involvement consumer decisions in ‘System 1’. Does this mean that surveys (and indeed qualitative research) are invalid? Should we be trying to make market research more ‘System 1’?
Do we know it when we see it?
In theory, there are many ways in which we can design surveys which allow us to get a more ‘System 1’ response:
- We can put people under time pressure.
- We can create cognitive load by setting a mental task (such as memorising a pattern).
- We can make things ambiguous or disguised, so that respondents think they are answering a different question.
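As a purely illustrative sketch of the first technique, a time-pressure condition might be approximated in an online survey by recording response latency and flagging which answers arrived quickly enough to count as ‘fast’. The helper name and threshold below are assumptions for illustration, not an empirical standard:

```python
import time

# Assumed cut-off for a 'fast' (more intuitive) response; not an empirical standard.
FAST_THRESHOLD_SECONDS = 3.0

def timed_answer(prompt, get_answer):
    """Return (answer, latency_seconds, was_fast) for a single survey item."""
    start = time.monotonic()
    answer = get_answer(prompt)            # e.g. input() in a console survey
    latency = time.monotonic() - start
    return answer, latency, latency <= FAST_THRESHOLD_SECONDS

# Demo with a canned respondent function (answers instantly, so it counts as fast):
answer, latency, was_fast = timed_answer(
    "Which toothpaste would you pick?", lambda q: "Brand A"
)
print(answer, round(latency, 3), was_fast)
```

In a real survey platform, the latency would come from the respondent’s browser rather than a local call, but the logic of separating fast from slow responses is the same.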
All of these techniques can encourage consumers to respond in a less careful, considered way – potentially recreating the mindset that they were in whilst making a low involvement purchase decision. But look at this more closely and you can see it’s not really that straightforward. When we answer a question, take an action, or consider what to do it can be very difficult, if not impossible, to determine whether it is a ‘System 1’ or ‘System 2’ response. This is because:
- There is individual variation in the extent to which people rely on more automatic versus more reflective decision making.
- There is huge context variation. I may automatically select a toothpaste but then remember a friend’s suggestion to help with my sensitive teeth and switch.
- People don’t always respond the same way to particular conditions. We can merely create or design questions that encourage that type of response but cannot guarantee that, for example, all people will respond the same way to cognitive load.
In fact, we cannot tell if we have been successful in generating a ‘System 1’ response since we don’t really know what one looks like. Even when we think carefully about things, we can get them wrong. Take the Cognitive Reflection Test, which is designed to distinguish reflective (‘System 2’) from more automatic (‘System 1’) thinking styles. One of the questions is as follows:
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
Many people struggle with the correct answer (which is 5 cents, by the way) even after examining it for a few minutes. This raises the question: if we can’t distinguish a ‘System 1’ response from a ‘System 2’ response, is there any way of knowing whether we have actually succeeded in achieving a ‘System 1’ mindset when we conduct research?
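The arithmetic behind the correct answer can be made explicit (a minimal sketch, added here for illustration): writing the two conditions as equations and solving shows why the intuitive answer of 10 cents fails.

```python
# The two conditions of the puzzle:
#   ball + bat = 1.10   and   bat = ball + 1.00
# Substituting: ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
ball = (1.10 - 1.00) / 2
bat = ball + 1.00
print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")

# The intuitive answer of $0.10 fails the check: a bat costing $1.00 more
# would then be $1.10, making the total $1.20 rather than $1.10.
```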
So, there is a methodological case to answer. There is no sure way to generate a ‘System 1’ set of conditions – and even if we can, there are no meaningful tools to determine if we have achieved it.
No question is an island
If we try to look at the world in a purely ‘System 1’/‘System 2’ way, then we are in danger of stripping out all the culture, history and learning that shape how we answer a question. Take the following example, used by Kahneman to demonstrate ‘System 1’ thinking:
An individual has been described by a neighbour as follows: “Steve is very shy and withdrawn, invariably helpful but with very little interest in people or in the world of reality. A meek and tidy soul, he has a need for order and structure, and a passion for detail.” Is Steve more likely to be a librarian or a farmer?
Most people reply that Steve is more likely to be a librarian than a farmer. This is because Steve, in fact, resembles a librarian more than a farmer. Associative memory quickly creates a picture of Steve as a librarian.
However, what we don’t think about is that there are five times as many farmers as librarians in the US and that the ratio of male farmers to male librarians is even higher. So, the right answer in this instance should be ‘farmer’. The fact is that base rates simply don’t get taken into account and thus we get it wrong. ‘System 2’ has failed to engage. It would appear that our ‘System 1’ minds have led us to an inaccurate response.
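The base-rate logic can be made concrete with Bayes’ rule. The numbers below are illustrative assumptions (only the five-to-one base rate comes from the text): even if Steve’s description fits a librarian four times better than a farmer, the base rate still tips the answer to ‘farmer’.

```python
# Relative base rates from the article: five farmers for every librarian.
prior_librarian, prior_farmer = 1.0, 5.0

# ASSUMED likelihoods, for illustration only: the description fits
# many librarians but far fewer farmers.
p_match_librarian = 0.40
p_match_farmer = 0.10

# Bayes' rule in unnormalised form: posterior score = prior * likelihood.
score_librarian = prior_librarian * p_match_librarian   # 1.0 * 0.40 = 0.40
score_farmer = prior_farmer * p_match_farmer            # 5.0 * 0.10 = 0.50

print(score_librarian, score_farmer)  # the base rate makes 'farmer' more probable
```

The design point is that the likelihood ratio (how librarian-like Steve seems) is what ‘System 1’ supplies instantly, while the prior (how many farmers there are) is exactly the term it neglects.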
But what does this mean? Our culture encourages a certain view of the world in which librarians are seen as shy and withdrawn. We use this learning, accumulated over many years of exposure to movies, books, and real-life examples, to give an instant answer. It is only if we ignore all this accumulated experience and knowledge and just focus on the question in isolation that it can appear as if we are ‘wrong’.
Hence, we are only ‘wrong’ if we look at the answer as an intellectual exercise in isolation from our broader lives. This rather raises the question of how useful it is to think of survey questions as either ‘System 1’ or ‘System 2’. There is a wider context which influences the way we respond; it is not just a function of whether we process information in a fast or slow manner. Our answers to survey questions will often, in fact, have been slowly shaped by many years of learning, so for much of the time we can argue they are ‘System 1’ responses.
Self-reporting ‘System 1’ behaviours
We certainly don’t decide what to buy unconsciously. We may not deliberate on our choice of toilet cleaner but we don’t turn into zombies when we are shopping only to be surprised to find it in our bag when we get home.
Having said that, there is no doubt that we have habitual routines that we rarely reflect on, with the reasons for these habits lost in the distant past. Of course, we may not be able to say why we always purchase a particular brand. Maybe it is because it was always the toilet cleaner that was used when we were growing up. Maybe it was due to the last advert we saw. The options are endless. But we are still able to comment on what we like / dislike about the toilet cleaner – we just may not be able to say why we habitually purchase that particular brand.
And even then, we are all capable of talking about our experience of low involvement activities. I can tell you about how I ride my bicycle: although it’s a fairly automatic behaviour, we are not zombie cyclists. Of course, some people are better at examining and talking about this than others – which is why some people make better teachers than others. Market research practitioners have long been experts at the careful crafting of surveys to help people explore and report what they do in these less reflective moments, intuitively identifying and challenging the limits of what people can self-report.
The notion that surveys are not fit for purpose for exploring low involvement (‘System 1’) consumer behaviour misses the point. Behaviours in these mindsets are available for exploration through surveys; what is needed is careful and intelligent survey design which respects the limits of self-reflection. Unfortunately, many survey designers have failed to understand that point, resulting in poor quality research which has not exactly helped make a strong case for surveys.
Both ‘System 1’ and ‘System 2’ inform decision making
Of course, the reality is that we bring both ‘System 1’ and ‘System 2’ to all our decisions in everyday life – but to varying degrees. Therefore, when we ask survey questions, just as in everyday life, the response may be a function of ‘System 1’ or ‘System 2’. Direct, explicit questions cannot be assumed to result in a ‘System 2’ response (as we saw from the librarian example). Our experiences have led us to hold a huge range of preconceptions and automatic rules of thumb that we bring to any situation. On this basis, asking if we can make surveys more ‘System 1’ doesn’t really make sense. People already answer questions using ‘System 1’ and ‘System 2’.
We need to start unpicking the relative importance of these in determining our responses but this requires careful analytical experimental work. We can add tools to our surveys that encourage more ‘System 1’ responses but it will hopefully be clear that fully distinguishing ‘System 1’ from ‘System 2’ is no simple task. [iv]
Our minds can speak for themselves
We must always remind ourselves that we are not zombies following programmed patterns of stimulus-response. As humans, we are instead uniquely able to draw on conscious subjective experience to determine our behaviour. We have ‘minds’ as well as ‘mental processes’.
If we were simply the result of our mental processes then we would be entirely measurable and predictable (albeit at times apparently irrational). We know this is not the case. We engage our minds to consider what we want and what we will do but our minds can also view our mental processes. We can ‘catch ourselves’ in our more routine mental processes.
The relationship between our minds and mental processes is complex and not at all well understood. Market research has always worked at the intersection of ‘minds’ and ‘mental processes’ so we are aware of the degree to which our minds are able to ‘answer for themselves’ but we are also aware of the limitations.
That is why market researchers use tools such as conjoint analysis, multiple regression, and projective techniques. A good survey designer knows that they can ask someone what they like about a cheese sandwich to understand why they bought it at lunchtime (minds). But the researcher will struggle to recognise, simply by asking, the degree of impact of advertising (or indeed other factors) on the decision to buy (mental processes).
We need an approach that smartly combines measuring minds and mental processes to explore the less conscious aspects of behaviour. Survey tools (and qualitative research) measure our minds (or mental states). This is different from behavioural science, which measures mental processes. A combination of the two can very usefully tease out not only what we like about cheese sandwiches but also what may have sparked our decision to buy one.
Deciding what questions to ask and how to ask them
So what can we ask people about? There is concern that introspection (self-report) doesn’t accurately reflect the ways people make decisions. An influential paper by psychologists Richard Nisbett and Timothy Wilson in the seventies [v] proposed that “when people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of true introspection.”
They presented evidence that such self-reports were invalid because:
- People can be unaware of the existence of a stimulus that influences a response.
- People can be unaware of the existence of the response and …
- People can be unaware that the stimulus has affected the response.
However, as a paper by Prof. John Cacioppo et al. [vi] pointed out, this finding has often been mistakenly applied to self-report measures of mental states such as attitudes, behavioural intentions, thoughts, mental images, recollections, emotional states or reports of behaviour. These are all measures that don’t ask people to self-report on their mental processes. For example, when we ask people to rate how an advert makes them feel about a product, or to report mental content of which they are aware (“What are your thoughts on this product?”), we are asking about mental states.
In comparison, when people are asked a question such as how persuasive an ad is, they are being asked to report on the result of their mental processes: how much the material would change their attitudes or behaviours (their own or others’).
So back to our cheese sandwich. I can tell you why I like cheese sandwiches for lunch. But I cannot meaningfully tell you the degree to which advertising is important in determining my liking for them.
This important distinction is often confused, and can be misunderstood by academic psychologists who assume market researchers are trying to report on mental processes rather than mental states. This is because academics are typically interested only in mental processes – mental states are typically seen as a secondary by-product. So it is hardly surprising that there is a lack of understanding among academics about what market research involves.
This is why rating scales regarding mental states for which respondents are willing and able to report accurately (such as attitudes, behavioural intentions and behaviours that don’t raise social desirability concerns) are perfectly legitimate. Of course, what we do need to avoid are those questions that seek to measure the processes which mediate the effects of a stimulus on a response. So asking about the perceived impact of an advert on my behavior would not be very helpful. Market researchers have long known this.
In summary, people can report what their attitudes are but they may be quite inaccurate when reporting whether their attitudes have changed from a previous point in time or what process led to the attitude they now hold.
There are any number of ways in which surveys can get things wrong. Two significant errors are:
- Some surveys demand too much reflection from the respondent. With a low involvement product such as toilet cleaner, for example, there is only a limited amount that a respondent will be able to tell the researcher about the purchase.
- We sometimes ask for detailed explanations of why a respondent bought an item. It is better, in fact, to ask what they like about the item and what they don’t like about other items, and then infer or model the other elements. Of course, liking for a cheese sandwich will provide much of the explanation of why we bought it, and that falls within the remit of market research. Understanding how advertising, or the store layout, may have influenced us starts to fall into the realm of behavioural science.
But, we repeat, simply because there are examples of poor survey design (or indeed poor qualitative research) does not mean that the tools themselves are flawed. There is a case for market researchers to place greater focus on the craft of survey design and to better understand the tools of their trade.
Should we be making surveys more ‘System 1’? Hopefully, the point has been made that they are already ‘System 1’, as our responses are a complex web of ‘System 1’ and ‘System 2’. Market research, like any mature discipline, has mostly understood its limitations. There is huge value to be gained by integrating behavioural science with market research. Behavioural science can bring survey design techniques that help push the boundaries of surveys, but surveys should not be considered ‘System 2’.
By Colin Strong
[i] James, William (1890). The Principles of Psychology.
[ii] Smith, Adam (1759). The Theory of Moral Sentiments.
[iii] Kahneman, Daniel (2011). Thinking, Fast and Slow. Macmillan.
[iv] For more discussion on this see Simonson, Itamar (2008). “Will I Like a ‘Medium’ Pillow? Another Look at Constructed and Inherent Preferences”. Journal of Consumer Psychology, 18(3): 155–169. doi:10.1016/j.jcps.2008.04.002.
[v] Nisbett, Richard E., and Timothy D. Wilson (1977). “Telling More Than We Can Know: Verbal Reports on Mental Processes”. Psychological Review, 84(3): 231.
[vi] Cacioppo, John T., Stephanie Cacioppo, and Richard E. Petty (2018). “The Neuroscience of Persuasion: A Review with an Emphasis on Issues and Opportunities”. Social Neuroscience, 13(2): 129–172. doi:10.1080/17470919.2016.1273851.