Untangling Implicit Bias

Implicit bias, the notion that we have prejudiced attitudes of which we are not aware, seems to be widespread in many walks of life.  Huge amounts of money are spent by corporations, police forces, NGOs and so on to counter the prejudiced implicit attitudes we hold (Gordon 2016).  And yet, just how certain can we be that these implicit biases actually exist?  And if they do, how much do they actually influence behaviour?

It is worth spelling out up-front that prejudiced attitudes undoubtedly exist and have a terribly corrosive effect on individual well-being, organisational success and societal progress.  But that does not mean we should ignore the hidden assumptions behind the notion of implicit bias – what is meant by 'implicit', the tools we use to measure it, and the nature of 'bias' – as however well-meaning the use of the term, there is a danger it leads us to look for solutions in the wrong places.  There are clearly some questions to be answered, as outlined below.

How much of our behaviour is actually implicit?

The graphic beloved of neuromarketing conferences is the iceberg (although even that is not quite what it seems).  It is used to illustrate the notion that the vast majority (typically 95%) of our behaviours are determined by implicit or non-conscious mechanisms (the terms are used interchangeably) which we cannot see – they sit below the waterline.  The small part we can see, our consciousness, accounts for only a small proportion of our behaviours.  There are two immediate issues with this:

  • When we try to establish where the 95% comes from, there is a paucity of convincing evidence. A variety of studies (Greenwald and Krieger 2006; Jolls and Sunstein 2006) show examples of how our behaviour is shaped by non-conscious mental processes, but they do little to establish that 95% of our behaviours could be accounted for this way. The 95%, presented as an empirically derived figure, therefore appears arbitrary.
  • Consciousness is not binary. Take riding a bicycle. As novice riders, pedalling and braking feel quite conscious and effortful; as we become more experienced, we may be only dimly aware of how much we are pushing the pedals or applying the brakes. These less conscious parts of our behaviour move in and out of awareness depending on factors such as our experience with the task and the external environment.  Consciousness is therefore a sliding scale rather than something 'either-or' – which calls into question the binary picture of behaviour being determined by a mind that sits either below or above the water.

We therefore need to question the degree to which 'implicit' really is a reasonable explanation of the prejudice that we see.  It may be part of the explanation, but valid evidence for how much it explains is lacking.

Do we have the tools to measure implicit bias?

Even if we accept that implicit processes do account for a high proportion of our behaviour, there is huge controversy about the value of the Implicit Association Test (IAT), the tool most commonly used to indicate the presence of implicit bias.  The analysis of this is well documented elsewhere [i] but, in summary, a recent meta-analysis showed the IAT to be no better at predicting discriminatory behaviour than explicit measures of bias.  What we measure may therefore simply be spurious.

Another important question concerns the degree to which these implicit biases actually influence behaviour. Ultimately, our concern is with behavioural consequences: how we respond and react to our environments. If implicit biases exist but play no detectable role in decision-making, should we care about their existence? We are all brought up within cultures that have certain dominant biases.  We may grow to question and dislike these biases, but it may be hard, perhaps even impossible, to completely eradicate our learned reactions.  What we can learn to change is our responses.  As such, it is fair to point to our reactions as evidence of learned bias, but if we have overcome them in how we act, is it fair to continue to point to them?

The Implicit Association Test faces serious question marks, but the controversy over the measurement tool should not distract us from the wider need to put whatever is measured into context.

What do we mean by bias?

The term ‘bias’ is highly loaded, so it is worth picking apart.  It suggests there is an ideal state in which we are free of spurious influences when forming our judgments.  And whilst readers would hopefully agree that is the ideal, we have to recognise that many may not.   They may hold views they consider perfectly legitimate – and what is defined as a ‘bias’ in one society, culture or era may not be viewed the same way in another.  The recent focus on gender bias is a useful illustration. For various reasons, many societies are now challenging biased views of men and women. However, a quick glance at not-so-distant history reminds us that this is a very recent legal and social development. Those who hold what we would call gender-biased views do not see them as biased – and a few decades ago theirs would have been the common view. In the current shared mindset, such views are recognised as socially unacceptable, so those who hold them tend to keep them hidden.

Second, bias in its more formal definition is a psychological ‘malfunction’, a systematic error in thinking or behaving. However, certain environments may be set up to incentivise specific biases, making it difficult to see the behaviour as biased at all. For example, many people – often but not exclusively men – consider it perfectly acceptable to dominate meetings at the expense of more collegiate colleagues (often, but not always, women), having been encouraged and rewarded for speaking up in their workplace.  If we frame dominant meeting behaviour as a bias, it implies a somewhat straightforward process of de-biasing:  confront the dominant speakers with their bias, provide de-biasing training, and they will rectify their behaviour. But this neglects the wider issue that dominating meetings may be entirely sensible behaviour in that workplace – the dominant speakers are the ones who get their voices heard, their proposals adopted and so on.

Calling something a bias suggests that all actors recognise a normative ideal state to which we all aspire.  However, as the examples above show, some people will disagree with the ideal state.  Others face a complex and subtle set of incentives, so even if they do accept the ideal they are not necessarily willing to sacrifice their position to reach it.

If we don’t recognise that ‘bias’ is not seen the same way by everybody, we risk assuming that addressing implicit attitudes will, on its own, resolve the problem.

Does implicit bias let discrimination off the hook?

If something is a psychological bias, the temptation is to place responsibility on the individual rather than the broader organisation or society in which they are situated.  At times this may be perfectly legitimate, but there is a danger it prevents us from properly examining our broader governing structures, which are much harder to address. As alluded to in the previous section, given our social tendencies we often conform to institutional pressures. Behaviours that seem biased or idiosyncratic in isolation may be perfectly reasonable when viewed within a larger organisational or societal context.

In addition, being unconsciously biased somehow seems a lesser moral evil than being discriminatory.  But when asked to think about it, we may be aware that we are not completely race- or gender-blind, and that we have subtly different ways of responding that we would rather not always reflect on.  As such, when these are identified by others, it is perhaps easier to accept the label of ‘implicitly biased’ than discriminatory (which carries legal sanction) or prejudiced (which carries ethical sanction).

To what end?

There is no doubt that we have moral blind spots, which will be easier to spot with the benefit of hindsight.  The philosopher Kwame Anthony Appiah highlights among our blind spots our prison system, our institutionalisation and isolation of the elderly, our destruction of the environment and our industrial meat production.[ii] These are blind spots of weakness – we notice them but fail to act on them.  But there are undoubtedly also blind spots of ignorance that we cannot even begin to see now.  So, as philosopher Gerald Jones points out, there is a need to continuously challenge our self-righteousness, question our moral certitudes, scan our own behaviours and scrutinise our own cognitive dissonances.

Which means that what we do about our prejudices merits some critical analysis.  And this starts with questioning the value of the term ‘implicit bias’, not least because training programmes in this area have a somewhat chequered success record.   But nor should we throw up our hands in despair – at the very least these programmes raise awareness of the issue.  And many will recognise that they are simply one part of a much wider programme of activity.

Nevertheless, the point remains that reducing a complex set of social, cultural and political issues to ‘implicit bias’ surely narrows the scope of activity that can address these pernicious issues.

By Colin Strong

Thanks to Tamara Ansons & Stephen Cantarutti for their valuable edits



[i] See both:



[ii] Washington Post September 26th 2010:  What will future generations condemn us for?

Colin Strong is Head of Behavioural Science at Ipsos. In his role he works with a wide range of brands and public sector organisations to combine market research with behavioural science, creating new and innovative solutions to long standing strategy and policy challenges. His career has been spent largely in market research, with much of it at GfK where he was MD of the UK Technology division. As such he has a focus on consulting on the way in which technology disrupts markets, creating new challenges and opportunities but also how customer data can be used to develop new techniques for consumer insights. Colin is author of Humanizing Big Data which sets out a new agenda for the way in which more value can be leveraged from the rapidly emerging data economy. Colin is a regular speaker and writer on the philosophy and practice of consumer insight.
