Artificially Inflated 

When you look at your recommendations on Amazon, are you amazed at how good they are? Or when you get your Discover Weekly playlist on Spotify, do you marvel at the accuracy with which your tastes have been recognised? These are undoubtedly the poster children for the power of Artificial Intelligence (AI) and its ability to personalise at scale. To be able to anticipate your preferences and make ‘intelligent’ recommendations is surely something truly impressive. Indeed, there is no doubt that Amazon recommendations are an effective sales tool – 35% of their sales come through this route.

Right now, there is great excitement and anticipation about AI.  Indeed, the Bloomberg Intelligence economist Michael McDonough tracked mentions of “artificial intelligence” in earnings call transcripts and found a huge uplift in the last two years.


But, as ever when a new technology emerges, there is a need to separate the hype from the reality. This is not to diminish the value and importance of AI – much has been written about the myriad ways it can benefit brands, governments, and individuals in many different contexts. But it has long been known that we tend to place disproportionate importance on the way technology shapes our society, which itself signals something about humans’ relationship with technology.

Here are five points of confusion about AI which, arguably, all have their roots in different aspects of our psychology. This is not to dispute the value of AI, but rather to challenge ‘AI-ism’: an unwarranted belief in the power of this technology.

Definition confusion

First, there is no clear, widely agreed definition of the term, which means that all manner of technology solutions can be labelled as AI in order to enhance the attractiveness of a project. There was a similar problem with ‘Big Data’: lots of interest and excitement but a poorly defined discipline. Of course, there are possible broad boundaries for what might be included – such as systems that use human thought processes as inspiration in their creation, perhaps using a neural network or deep learning system. But as long as the term is attractive to business investors, all manner of technologies will be bundled into it.

What does this tell us about our relationship with technology? In a sense, it reflects the technological determinism that dominates our culture – the idea that technology is a shaping force over which we have little understanding or control. Humans are often collectively in awe of technology and do not always stop to interrogate it sufficiently, even to define its terms.

Anthropomorphic confusion

Second is the nature of the ‘interpersonal’ relationship between humans and technology. The term ‘intelligence’ undoubtedly has human connotations – so we are immediately applying human characteristics to AI. Of course, it has long been understood that we tend to ‘humanise’ technology. This is the ‘Eliza Effect’, named after a computer programme called Eliza, developed by MIT computer scientist Joseph Weizenbaum. The programme was designed to mimic a psychotherapist, largely by rephrasing the patient’s replies as questions. Weizenbaum was famously surprised by his secretary’s enthusiasm for interacting with Eliza, despite her knowing it to be a computer programme. He considered that this reflected a ‘powerful delusional thinking in quite normal people’.

But the act of personifying technology taps into a deep-seated anxiety that humans have about their power relative to machinery. Throughout history we have conflated human and machine characteristics, and this conflation inevitably leads to speculation about the ways in which machine ‘intelligence’ could surpass our own. That anxiety fuels such speculation and, in doing so, disproportionately inflates our perception of the technology’s power.

Measurement confusion

Third is the way in which we fall into logical traps when evaluating the effectiveness of AI. AI results can look impressive, but it is hard to know how well they perform versus much more straightforward methods. We could, for example, simply show people the best-selling books in the category in which they have recently made a purchase. Would this perform just as well? After all, some of the challenges we use to judge the effectiveness of AI are not necessarily the toughest. The answer is that we often don’t know, because there is no baseline to compare against. The AI may well be better, but much of the time we simply cannot tell how good it is versus simpler methods.

This is the ‘base rate fallacy’, a well-known psychological phenomenon in which our minds are drawn to specific information and pay less attention (than perhaps they should) to general information. Our tendency to do this has the potential to overstate the effectiveness of AI relative to other, simpler techniques.
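To make the point concrete, here is a minimal sketch of the kind of baseline comparison the argument calls for. The figures and the ‘best-sellers in the same category’ rule are purely illustrative, not real retailer data:

```python
# Hypothetical A/B comparison: an AI recommender versus a simple
# "best-sellers in the same category" baseline. All numbers are invented.

def conversion_rate(purchases: int, impressions: int) -> float:
    """Share of recommendation impressions that led to a purchase."""
    return purchases / impressions

ai_rate = conversion_rate(purchases=4_200, impressions=100_000)        # AI group
baseline_rate = conversion_rate(purchases=3_900, impressions=100_000)  # best-seller group

# The interesting figure is not the AI's conversion rate on its own, but its
# uplift relative to the simpler method – the missing baseline discussed above.
uplift = (ai_rate - baseline_rate) / baseline_rate
print(f"AI: {ai_rate:.2%}  baseline: {baseline_rate:.2%}  uplift: {uplift:.1%}")
```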

Circular confusion

Fourth, when AI offers recommendations, many of our choices may be driven simply by the fact that something was recommended (just as any advertising would increase sales), rather than by an inspired insight into our tastes and preferences. The whole process can become circular: I make my selections because they were recommended, and the recommendations are based on my selections. Measuring the effectiveness of the AI then becomes all but impossible.

The problem is that even the ‘smartest’ of AI engines needs to be able to examine purchasing behaviour that has not been influenced by its own recommendations. The danger for brands is that the quality of recommendations slowly starts to decline as the engine begins to measure itself rather than ‘pure’ consumer behaviour. This kind of ‘AI echo chamber’ has the potential to slowly but surely erode the very recommendations it produces.
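One common safeguard – sketched below with an arbitrary 5% hold-out share and hypothetical customer IDs, not any particular brand’s system – is to keep a small group of customers who never see recommendations, so that ‘pure’, uninfluenced behaviour can still be observed:

```python
import random

HOLDOUT_SHARE = 0.05  # illustrative: 5% of customers never see recommendations

def assign_group(customer_id: str) -> str:
    """Deterministically assign a customer to 'holdout' or 'recommended'."""
    rng = random.Random(customer_id)  # seeded per customer so the assignment is stable
    return "holdout" if rng.random() < HOLDOUT_SHARE else "recommended"

# Purchases made by the hold-out group reflect behaviour uninfluenced by the
# engine's own suggestions, so they can be used both to retrain the model and
# to measure the genuine uplift the recommendations provide.
customers = ["cust-001", "cust-002", "cust-003", "cust-004"]
for c in customers:
    print(c, assign_group(c))
```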

Again, our inability to disentangle the effects of different influences on our behaviour is not surprising. As technology becomes ever more integrated into our lives, we tend to mentally adjust to it and fail to spot very slow changes in performance. This is a common aspect of human psychology – we notice sudden shifts, not incremental changes.

Confusion about where the intelligence comes from

Fifth, AI is based on human intelligence. We may, for example, train AI to produce effective diagnoses of illnesses based on the analytical activity of huge numbers of physicians. But, perversely, the very ‘intelligence’ on which the AI is based could shrink as tech-health makes serious inroads and fewer physicians are left making diagnoses.

The question we have not yet asked is: what if things change? What if we start to face new illnesses? The challenge then is that the AI no longer has the training ground it needs to develop accurate diagnoses, because it has no magical intelligence of its own. It can only extract the intelligence demonstrated by humans; there is nothing intrinsically ‘intelligent’ about AI.
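A toy sketch (with entirely made-up symptoms and labels) of why that matters: a model can only return diagnoses it has seen humans make, so a genuinely new illness falls outside its ‘intelligence’ until physicians have labelled fresh cases:

```python
# Hypothetical training data: symptom patterns labelled by physicians.
training_cases = {
    ("fever", "cough"): "flu",
    ("rash", "fever"): "measles",
}

def diagnose(symptoms: tuple) -> str:
    # The 'intelligence' is entirely borrowed from the labelled cases above;
    # a new symptom pattern simply has no answer until humans provide one.
    return training_cases.get(symptoms, "unknown – refer to a physician")

print(diagnose(("fever", "cough")))            # flu
print(diagnose(("fatigue", "loss of taste")))  # unknown – refer to a physician
```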

As humans, we often tend to defer to more powerful individuals and institutions. This deference, whilst understandable, may also lead us to neglect essential safeguards, both for the technology itself and for our own well-being.

In conclusion

Technology does not sit in a bubble untouched by humans – we have an intense and symbiotic relationship with it, and the nature of that relationship has very tangible implications. We need to be aware of the way in which our very human relationship with AI has the potential to be the undoing of this hugely valuable technology, because we are in danger of overstating its power and seeing our response to it as inevitable.

We should challenge whether large-scale replacement of human skill sets is inevitable; as we have seen, it is not necessarily the case that replacing large numbers of physicians is a good thing. Brands could ask whether it makes sense to offer human-based recommendations to large segments of the population at a much lower cost than investing millions in AI technology. And we need to take steps to avoid an AI bubble that bursts as brands become underwhelmed when over-inflated promises fail to deliver.

All of this requires us to take a careful look at the nature of our relationship with technology so that we can think clearly about the real implications of AI, rather than projecting onto it our very human hopes and fears.

By Colin Strong


Colin Strong is Head of Behavioural Science at Ipsos. In his role he works with a wide range of brands and public sector organisations to combine market research with behavioural science, creating new and innovative solutions to long standing strategy and policy challenges. His career has been spent largely in market research, with much of it at GfK where he was MD of the UK Technology division. As such he has a focus on consulting on the way in which technology disrupts markets, creating new challenges and opportunities but also how customer data can be used to develop new techniques for consumer insights. Colin is author of Humanizing Big Data which sets out a new agenda for the way in which more value can be leveraged from the rapidly emerging data economy. Colin is a regular speaker and writer on the philosophy and practice of consumer insight.
