Why do we Bother Asking Questions Anyway?

It’s fair to say that Behavioural Science is now the subject of a huge amount of discussion and activity within the research industry. The opportunity to apply the vast amount of academic work in this area to solve commercial and public sector challenges has at last been realised. The work of people such as Daniel Kahneman, Richard Thaler, Cass Sunstein, Dan Ariely and Gerd Gigerenzer has not only contributed to our understanding of human behaviour but has also hugely popularised the discipline.

Behavioural Science does not enjoy a very precise definition in the same way as, for example, physics or biology. But much of the popular conception of this emerging discipline draws on the psychology of ‘judgement and decision making’ as a means of explaining human behaviour. Implicit within this is the claim that human decision makers have little or no access to the processes underlying their choices. The notion that self-reporting could be misleading was presented four decades ago in a highly influential paper by Nisbett and Wilson (1977),[i] who argued that people have “little or no introspective access” to their cognitive processes. Their case was based on a wide-ranging review of evidence indicating that people cannot correctly report on the cognitive processes underlying complex behaviours such as judgment, choice, inference, and problem-solving.

This clearly creates a dilemma for market research, as the industry’s methods typically rely on self-report and so assume that we have access to our inner selves. As awareness of Behavioural Science grows, there is growing alarm that placing value on introspective methods of market research could be a mistake. This would mean that we cannot rely on surveys and focus groups to understand human behaviour, and need to turn instead to observed behaviour to derive consumer insights.

However, as ever with the task of understanding human behaviour, it is not necessarily as binary a question as this.  There are still good reasons to ask people questions.  It is too simplistic to assume that we are not able to self-report anything of value.  As many philosophers and social scientists point out, our everyday personal experience tells us that this is simply not true.  We are able to account for many of our behaviours and decide what we want to do in a perfectly sensible way.  The challenge is to understand the qualifications and boundaries of the different techniques for understanding human behaviour.

As market research practitioners, we have a harder job than academics, who have the luxury of being able to focus on seeking evidence that supports their particular view of the world. Boundaries and qualifications are less important to them than demonstrating the breadth and depth of their perspective. Our job is different – we have practical challenges for which we need to find solutions, and as such we must use the best tools available. There is no point in using a chisel when a screwdriver is much more effective. We need to know when to use a chisel and when to use a screwdriver.

On this point, there is much debate about whether there is a need to continue to ask people questions. If, the argument goes, we are not reliable witnesses of our own behaviours, then what value is there in surveys and focus groups? Perhaps we need to focus on observing behaviours instead?

However, there are five key points that, as an industry, we need to consider before we run the risk of throwing the baby out with the bathwater. The overall point is that the position is not as clear-cut as it can at times be presented, and there remains a case for ‘introspective’ research techniques.

Are we really as misguided as is suggested?

We need to challenge the premise that Behavioural Science indicates people are uniformly poor judges of their own behaviour. Whilst the wide variety of heuristics and biases that have been identified are valid, we need to consider two points.

First, many academics, such as Gerd Gigerenzer, believe that there has been too much focus on the downsides of ‘system 1’ processing – that is, the unconscious mental shortcuts we use to make decisions. Gigerenzer has assembled a huge evidence base around ‘smart heuristics’, pointing to the way in which mental shortcuts can actually offer ‘good enough’ thinking that delivers positive outcomes. [ii] For example, research has identified a ‘collective recognition heuristic’ – a simple forecasting heuristic whereby people’s recognition of names is an effective proxy for competitiveness. This has been found to enable naïve participants to predict the outcome of football matches. [iii]

Second, there is a range of studies demonstrating that the heuristic shortcomings elicited in experiments depend on the environment and the types of questions used, and fail to materialise outside of some fairly specific sorts of questions and conditions. The failure to see this means we are in danger of creating a new ‘Behavioural Economics Heuristic’ – a shortcut whereby, in case of doubt, we do not rely on human self-reporting. That dramatically oversimplifies reality. Psychologist Itamar Simonson argues that whilst “context and task characteristics can impact preferences” is not in doubt, “some of the most prominent demonstrations of preference construction have arguably had limited relevance and have tended to exaggerate the degree to which preferences are constructed.” A more recent paper by psychologists Newell and Shanks reviews the literature and supports the premise that the mind appears to be much more conscious and accessible to a first-person perspective than many researchers assume. [iv]

Of course, for certain choices in certain contexts, we are not reliable witnesses of our own inner states or the determinants of our behaviour (see the various studies on ‘choice blindness’, for example).[v] But at other times we appear to be very good at determining our behaviour. We know that surveys often generate high predictive validity of consumer outcomes. For example, many of the concept, product and copy testing tools used by Ipsos have been shown to be highly predictive of subsequent market behaviour. There is a more general point here: the market research industry needs to be clearer about the types of research questions and techniques that are strongly predictive of behavioural outcomes, and those where the relationship is less direct.

Sometimes we do actually need to understand what goes on in consumers’ minds

As Matthew Salganik of Princeton University points out,[vi] “Researchers who study dolphins can’t ask them questions. So, dolphin researchers are forced to study behaviour. Researchers who study humans, on the other hand, should take advantage of the fact that our participants can talk.”

He goes on to point out that some of the most important social outcomes and predictors are internal states, such as emotions, knowledge, expectations, and opinions. Internal states exist only inside people’s heads, and sometimes the only way to learn about internal states is to ask.

We might eventually be able to infer that a customer was unhappy about their recent experience by observing the way in which they stop spending money and take their business elsewhere. But it may be quicker, easier and more profitable simply to ask them. We will not get there by observation alone.

Part of the point here is that we need to make a distinction between the reliability of, on the one hand, our ability to self-report our mental states and, on the other, what has determined those mental states.  Many of the criticisms of the market research industry confound these two very different points – but in fact, this distinction is well understood and respected by researchers.  We need to make sure that when we ask questions, we ensure that we are asking the right ones.

Every approach has its limitations

Whilst the limitations of one methodology can drive us into the welcoming arms of another, we may slowly start to realise that this one too has limitations.  As such, we need to be careful about what we can learn from simply observing behaviour.

Observational data is an excellent means of developing hypotheses about what drives behaviour – but it requires a human to move from data to insight. We can do this through the use of analytical or theory-based frameworks, but this is still fundamentally a subjective process.[vii] In order to move to something less subjective, we need to do experimental work. This typically means reducing the number of variables we are looking at to something manageable, which has the unfortunate consequence of not properly reflecting the multi-variable nature of the consumer behaviour we are interested in, and thus limits its value.

There are also logistical challenges to using observed data. In the 1970s and ’80s, in-market testing was widely used for copy testing and new product testing. However, this fell out of favour for a number of reasons: it was expensive, difficult to execute, too slow, and easy for competitors to disrupt your test or copy you. Even your own sales force could distort the results of a new product test (by driving up sales) because they knew it was a test and they wanted a new product to sell. In-market testing was replaced by survey-based tools which had none of these weaknesses and were found to be just as accurate because they could be better controlled. In these studies, there was an attempt to mimic behaviour by having consumers make choices from a competitive set (sometimes in a real or simulated store and sometimes just in a survey). Over time, approaches moved away from a behavioural orientation to gain speed and reduce cost. These same trade-offs have not changed.

So replacing asking people with observation is not a panacea.  Every approach has limitations.

What model of humans do we believe in?

The fact that we can derive huge amounts of insight from observational data is not disputed,[viii] but we do need to question what we are missing in an account of human behaviour derived from observation alone. If we are not careful, we are in danger of adopting a ‘Behaviourist’ approach to human behaviour – essentially the belief that we do not have any credible form of internal life or self-determination that shapes our behaviour. This approach has long been discredited. In fact, the whole area of judgement and decision making (which accounts for much of Behavioural Science) has its roots in cognitive psychology – itself a reaction to the failure of behaviourism to provide a compelling account of human behaviour.

But many of the debates currently raging in market research have their roots in these different models of ‘personhood’. Do we believe humans to be stimulus-response creatures driven by learnt associations, or individuals whose behaviour is determined by meaning and cultural context? The answer is probably both. And there are other models that we can equally consider. The point is that the model we choose depends on a combination of our core values and the questions we are trying to answer. But to suggest that one model has empirical and scientific legitimacy over the others not only misunderstands ‘how science works’ but also over-simplifies the complex reality of human behaviour.

The market research industry is generally poor at articulating the theoretical underpinning of the profession. But it is a mistake to assume that it does not exist. Surely the fact that much of market research has traditionally used introspective techniques means we have a model of humans which, at the very least, assumes some level of stable inner life – which in turn means we have some level of self-determination.

The view of the more evangelical wing of Behavioural Science, popular among critics of market research, is that our inner life is merely the by-product of our neuronal activity. Philosopher Mary Midgley deals with this in a very eloquent way: [ix]

“Zombies are supposed to be creatures that act exactly like human beings, but filleted ones, with the consciousness removed. This bizarre idea assumes that consciousness is a removable item like an appendix – a sort of paralysed soul, one that has no effect on behaviour. This is the Behaviourist myth.  The most obvious reason why it can’t be true is that so much of our activity is drastically shaped by effort and therefore by attention, which can’t be unconscious. Of course there is also a great deal that is unconscious. But that unconscious part can only work while attention constantly stands by to deal with choices when they come up.”

These points indicate the need for the market research industry to articulate more clearly a view on ‘what it means to be human.’ Do we consider humans to be a set of learned neuronal responses, with our minds simply the by-product of post-hoc rationalising brain-cell activity? Or do we consider humans to be sentient beings with free will and consciousness? Different views are allowed and are implicit in our practices, but they need to be surfaced, shared and challenged. Research agencies need a point of view on these issues – but make no mistake, asking questions implies a perfectly legitimate model of human behaviour, albeit perhaps an incomplete one if it ignores the realm and influence of the nonconscious.

We need to ask questions better

Finally, it is important to note that the way we ask questions does need to improve in a number of ways. First, we need to avoid the temptation to ask questions simply ‘because we can’. Just as a street trader hawking their wares intuitively learns the effective ways of generating sales, so the market researcher learns the boundaries of what it is possible to ask respondents without generating spurious responses. Whilst a huge body of work sits alongside this tacit knowledge to guide best practice, the rise of online interviewing means practitioners increasingly no longer hear respondents as they struggle to answer unreasonable questions. The industry as a whole needs to ensure that proper piloting of questionnaires is budgeted for and that time is allocated for it to take place.

There is an opportunity to include new forms of indirect questioning, time-pressured response techniques and cognitive load methods that allow us to track implicit attitudes and system 1 style processing. Many of these are not only validated techniques but, because we can include them in surveys, they allow us to develop scalable measures. Nevertheless, we also need a more thoughtful approach to when, where and how implicit and explicit measures differ. Too often explanations are highly subjective, without any form of theoretical framework to act as a set of guiding principles.

But the overarching point is that simply because market research has at times made itself an easy target by asking the wrong questions does not mean that the principle is flawed. The industry needs a clearer statement of the boundaries of when to ask questions, must do more to ensure that questioning is executed well, and needs better policing of both.

Why we need integrated approaches

There are clear limitations of asking questions.  We are not always good at recalling details of low involvement activity, particularly if this happened some time ago.  Our ability to determine why we behave in certain ways is limited.

Good market researchers have always known this and taken steps to adjust for these limitations. Added to this, we are now in an era of unprecedented data availability, offering granular information about often very intimate behaviours in a very unobtrusive manner and on a longitudinal basis. Indeed, we can even derive new insights about consumers’ inner lives from examining the patterns in the data.

This calls for what every good researcher knows – we need to triangulate data sources to arrive at solutions we can be confident about. If a consistent picture of a behaviour and the factors influencing it is obtained from more than one source and using more than one method, it increases confidence in the analysis.


Understanding consumer behaviour is a complex activity. Whilst it is always tempting to look for simple explanations, the reality is that in doing so we run the risk of simply making another set of mistaken over-claims.

We have set out the intellectual and empirical argument for why talking to consumers matters, beyond the empty suggestions that we can ‘discover the why’ or need to ‘listen to the real voice of the customer’. And in the process, we have begun to spell out the need for market research to adopt integrative approaches. We have the luxury of being able to use a variety of frameworks that are not available to academics, whose professional reputations centre on developing evidence for their particular point of view.

We are hugely excited about the opportunities afforded by using more observational methods.  Indeed, the market research industry has market-leading thinking and approaches to leverage these valuable new sources of consumer insights.  We are now in an environment where we have a much wider range of data sources where we once often only had survey data.  The challenge we now have is to intelligently and empirically articulate the boundaries of the different sources within our practice – for the strength of any area is not only knowing when it applies but also when it does not.

By Colin Strong

[i] Nisbett, R. and T. Wilson (1977). “Telling more than we can know: Verbal reports on mental processes.” Psychological Review 84(3): 231-259.

[ii] Gigerenzer, Gerd (2000). Simple Heuristics That Make Us Smart. Oxford University Press.

[iii] Herzog, Stefan M. and Hertwig, Ralph (2011). ‘The wisdom of ignorant crowds: Predicting sport outcomes by mere recognition’. Judgment and Decision Making, 6(1), 58-72.

[iv] Newell, B.R. and Shanks, D.R. (2014). ‘Unconscious influences on decision making: A critical review’. Behavioral and Brain Sciences, 37, 1-63.

[v] Hall L, Johansson P, Tärning B, Sikström S, & Deutgen T (2010). Magic at the marketplace: Choice blindness for the taste of jam and the smell of tea. Cognition, 117 (1), 54-61 PMID: 20637455

[vi] Salganik, Matthew J. 2017. Bit by Bit: Social Research in the Digital Age. Princeton, NJ: Princeton University Press. Open review edition.

[vii] Gitelman, Lisa (ed.), ‘Raw Data’ is an Oxymoron, The MIT Press, 2013

[viii] Strong, Colin (2015). Humanising Big Data. Kogan Page

[ix] Midgley, Mary (2004). ‘Zombies can’t concentrate’. Philosophy Now.



Colin Strong is Head of Behavioural Science at Ipsos. In his role he works with a wide range of brands and public sector organisations to combine market research with behavioural science, creating new and innovative solutions to long standing strategy and policy challenges. His career has been spent largely in market research, with much of it at GfK where he was MD of the UK Technology division. As such he has a focus on consulting on the way in which technology disrupts markets, creating new challenges and opportunities but also how customer data can be used to develop new techniques for consumer insights. Colin is author of Humanizing Big Data which sets out a new agenda for the way in which more value can be leveraged from the rapidly emerging data economy. Colin is a regular speaker and writer on the philosophy and practice of consumer insight.
