Our Liminal Relationship with Digital Assistants

Digital assistants (DAs) are taking the world by storm.  Once the preserve of science fiction, they are now used by millions of people – whether on a mobile device or via a home set-up.  The way we interact with these devices says something very important about us and our relationship with technology.  As humans, we are drawn to relate, and there is little doubt that this technology encourages that.  But of course, this is not human-to-human relating: we know that we are talking to a machine, even though it often feels as if we are drawn into a human style of relating.  We call it a ‘Liminal Relationship’, meaning something ‘in-between’.

Brands need to understand the nature of this liminal relationship because, as DAs proliferate, it has huge implications: the way we engage is perhaps as important as what we are engaging over.  Just how do these relationships change our relationship with brands?  How should DAs be designed to optimise relationships?  How should DAs be used in different contexts – so not just in the home but also in cars, for example?

A new medium inevitably changes the structure of how we engage.  In fact, all media encourage certain ways of thinking and demand particular styles of relating.  It can be hard to spot what these are – so set out below are four provocations on the way in which we need to better understand the nature of this strange but important relationship that is rapidly developing between humans and DAs.

The botox effect

When watching people interact with digital assistants, there is widespread use of anthropomorphic language.  We don’t call them ‘it’ – but typically ‘she’ (the gender typically assigned says something in itself). Users often talk about them ‘learning’ and ‘understanding’.  One person interviewed recently talked about how he did not want his DA to ‘let him down’ when showing it to friends, but that it might happen as ‘she’ was ‘sometimes evil’.

Of course, our tendency to humanise our tools is nothing new. As long ago as the 1960s, the ‘Eliza effect’ documented the way in which a simple computer programme designed to mimic psychotherapy could elicit an intimate response from users.  But the degree of humanisation that comes with DAs is arguably something new to human culture.

The challenge for brands is that we all want different things from our relationships.  Some want close and intimate relating; others want more distant, transactional-style relationships. If we are placing DAs in such an intimate role in our lives, then there will come a point where users bring these same expectations to them.

At that point the Liminal Relationship breaks down. Robots cannot respond to humans with the full array of hopes, fears, anxieties and happiness.  So, whilst it is tempting for brands to imbue their DAs with human-style responses, they will clearly never be human.

The difficulties that then ensue are perhaps akin to the way in which we engage with someone who has had botox injections.  It can be hard to know what they feel because they can often no longer express themselves through their faces in the subtle, nuanced way that humans are used to.  Indeed, there is evidence that this can lead to a lack of empathy on their part, as they can no longer mirror their friends’ emotional states.  The overall relationship can therefore suffer.

And perhaps so it is with our relationship with DAs: voice has traditionally been a key means of gaining empathy and connection, and the inability of DAs to deliver on this could have implications for brands.  Brands have carefully built warm, human relationships over many years via a variety of channels such as TV advertising.  Whilst users and brands may both want to build emotional bonds through DAs, the liminal nature of the relationship may simply mean that hopes and expectations are not matched by the experience.

The deference effect

We have a very human tendency to defer decision-making to authority figures.  In Stanley Milgram’s well-known obedience experiment, participants (‘teachers’) were told to administer an electric shock every time a ‘learner’ (in fact a stooge) made a mistake, increasing the level of shock each time. There were 30 switches on the shock generator, marked from 15 volts (slight shock) to 450 volts (danger – severe shock).  The learner deliberately gave mainly wrong answers, and for each of these the teacher gave him an electric shock.  All the participants continued to 300 volts, and 65% continued to the highest level of 450 volts.  This demonstrates the human tendency to defer to authority figures, to the extent that we can administer what were, for all the participants knew, life-threatening electric shocks.

We therefore have an interesting situation when it comes to DAs, as these are associated with brands that carry huge authority (Amazon, Google, Microsoft, Apple).  It would not be unexpected for humans to defer to the authority of these organisations.  In addition, the way in which DAs are routinely advertised as being ‘smart’ and ‘intelligent’ only increases that authority.

What are the implications for brands?  Perhaps this means that brands need to take great care over how DAs are used in circumstances where deference could be problematic.  Autonomous cars are a good case in point: there are many circumstances where the technology cannot currently distinguish the myriad of very human interactions involved in driving.  Consider roadworks where a self-driving car is approaching a red traffic light, but a road worker is waving the people and cars through. How does the car know that it can ignore the red light? If we defer to the DA, that may be just the wrong thing to do.  Furthermore, it is not necessarily in the interests of brands to have a very deferential set of customers – a creative and dynamic connection is much more likely to result in a long-term satisfying relationship.

The I, Robot effect

We inevitably shape ourselves to our technology – we adapt to the tools we use, and ultimately that can shape how we see the world.  There is an adage that if all you have is a hammer, everything looks like a nail.  This has always been the case, but it is perhaps easier to see as time passes, as shown by these examples:

  • Printing press: This moved society from an oral tradition, where stories were shared and hand-written books were read aloud, to one where we consume knowledge individually. It can be considered responsible for ushering in an era of individualisation.
  • Telescope: This challenged the notion that we are at the centre of the universe and that our lives are dictated by God, and instead ushered in an era where we consider our lives to be determined by mathematical laws of nature.

So just how might DAs be changing us, and what are the implications for brands?  At this point it is hard to be definitive, but when talking to early adopters of DAs, a typical comment is ‘I have learnt the best way to talk to her now’.  There is often a sense that DAs are training humans to adapt to their ‘needs’, shaping us to behave in ways that are perhaps more linear and rational and less idiosyncratic and ‘human’.

Brett Frischmann wrote about this when he suggested that, over time, humans may fail the Turing Test (which assesses whether a machine’s responses can be distinguished from a human’s) not because machines are becoming more human but because humans are becoming more machine-like.  So, what are the implications for brands?  Perhaps there are two key elements.

First, if we get used to humans responding in a limited number of ways, then we start to have a hollowed-out view of what it means to be human – we assume that, simply because this is how we require humans to operate when they engage with our brand, this reflects who they are.  This may be true to some extent, but there may also be hidden depths that brands fail to see and respond to.  So, if we design an autonomous car to do all the motorway driving and require the passenger to take control only when joining and leaving the motorway, we may fail to see that some drivers, in fact, enjoy the driving experience.

Second, optimising the interactions requires an understanding of humans.  If we have a propensity to adapt to our tools, then brands are in danger of losing some ‘spark’ from the relationship.  Just as in human-to-human relationships, if things are very comfortable and predictable then they can become, well, a little dull.  So perhaps brands need to design in the ability for humans to be themselves – unpredictable, idiosyncratic, emotional and so on.  Which means that perhaps DAs need to invite this: to poke fun, to leave options open, rather than attempting to be omniscient.

The language effect

One of the key ways in which we engage with DAs is through voice.  In a sense, this is a fundamental shift in the way we express ourselves online, as traditionally when we engage with digital technology we use text. So just what does this shift from text to voice do to the way we relate to technology and brands?

All media have different strengths – and these shape the nature of the discourse. We would never have used smoke signals, for example, to communicate abstract, complex ideas!  When we use text, we typically communicate in a linear, structured way to express ideas, make requests and so on.  There is always a slight delay between the thought and the action, a chance to review and revise.  And our written words may not express the underlying emotional content, whether that be sarcasm, humour, anxiety and so on.

Voice, on the other hand, is often less linear – our intention is often voiced before further refinements are added, so motivations may be more transparent.  It is also a more immediate reflection of our desires: we are not able to edit what we have said in the way we can with text.  And our voice will often express a range of emotions even if we do not intend it to.  If we are feeling anxious, that can often be easily recognised by the recipient, as it is less about what we say and more about how we say it.

The implications for brands are quite complex.  First, we need to consider what we are losing in the transition from text to voice.  The interaction may be harder for brands to understand, as needs, preferences and so on are perhaps stated in a less coherent way.  On the other hand, it may be easier to understand the intention of the user – not only through the changed structure of the ‘input’ but also by interpreting the signals given by how things are said.

Voice also perhaps creates a set of expectations on the part of the DA user.  We are used to the recipient understanding the nuance of what we say based on our tone of voice.  And what we say is different – we are more likely to be sarcastic, for example.  If DAs cannot pick these things up, then there is potential for frustration or indeed for misreading.  Just think of the potential harm of a misread sarcastic comment when driving.  On the other hand, our propensity to make our intentions clear is perhaps an opportunity for brands – less inference is needed to work out what the user really wants.

In summary

It’s tempting to think of technologies as ‘neutral’ – simply tools that humans can choose to use.  But all tools bring with them a new way of relating to the world, a changed agenda.  It can be hard to spot what these are, and perhaps even harder to offer clear, definitive evidence each time.  This makes it all the more important that we engage in what Lewis Mumford called ‘disciplined speculation’, using the best available knowledge to develop a series of hypotheses.  The area of human engagement with technology is often not considered by brands when they are developing new products and services, and the failure to inform product development or market strategies with these insights may help to explain why so many of those products and services fail. As technology becomes an ever more integral part of our lives, it is critical that we better understand the very human relationships which determine its success or failure.

By Colin Strong




Colin Strong is Head of Behavioural Science at Ipsos. In his role he works with a wide range of brands and public sector organisations to combine market research with behavioural science, creating new and innovative solutions to long standing strategy and policy challenges. His career has been spent largely in market research, with much of it at GfK where he was MD of the UK Technology division. As such he has a focus on consulting on the way in which technology disrupts markets, creating new challenges and opportunities but also how customer data can be used to develop new techniques for consumer insights. Colin is author of Humanizing Big Data which sets out a new agenda for the way in which more value can be leveraged from the rapidly emerging data economy. Colin is a regular speaker and writer on the philosophy and practice of consumer insight.
