Robotic Empathy

The problem with vampires is that, for much of the time, it is hard to tell them apart from us mere mortals. We may be talking to one in a perfectly normal fashion and be completely taken by surprise when we suddenly feel the fangs on our neck! Despite our surprise, we are well aware that whilst vampires may look human, they lack the essential qualities of being human. At the other end of the spectrum, when we watch Shakespeare’s play A Midsummer Night’s Dream, we have no problem knowing that although Bottom’s head looks like that of a donkey, he is a human being ‘on the inside’. His donkey appearance is, in fact, concealing his human essence.

Do the same principles apply to the way we look at robots? There is huge interest in reducing the gulf between humans and machines, driven by the desire to make it easier for us to communicate with them. SoftBank Robotics sells Pepper, a ‘pleasant and likable’ robot built in humanoid form and designed to serve as a human companion. The company claims that Pepper is able to ‘perceive human emotion’ and ‘loves to interact with you. Pepper wants to learn more about your tastes, your habits, and quite simply who you are.’

Our tendency to invest feelings in machines

As robotics and digital technology occupy our lives in ever more intimate ways, it is important that we understand the psychology of the human–machine relationship. On the one hand, this allows companies to design machines that are as effective as possible; on the other, it allows us to understand, and therefore better manage, the potential for unintended social downsides.

At the heart of our relationships is the concept of essences. Professor Bruce Hood, a cognitive neuroscientist, has done a great deal of work in this area.[i] His research suggests that when we form emotional attachments to others, we ‘essentialise’ them: we come to believe there is some property that makes them hard or impossible to replace. He makes the point that we don’t form these attachments to polystyrene cups or anything else that is clearly duplicated. This perhaps helps to explain why we can find identical twins rather eerie, or why genetic modification can feel like an attack on the essence that generates true identity. This notion of essence means we hold deep-seated beliefs about the importance of authenticity. Hence the value of a work of art, in both financial and emotional terms, will be severely reduced if you discover it is not by the artist you thought it was.

Professor Hood undertook a series of studies[ii] in which he convinced children that he had a machine capable of duplicating any physical object. The boxes were made to look very scientific, with wires and lights; he would place an object in one and ‘activate’ it. After a few seconds, the other box would appear to start up by itself, upon which he would open it to reveal two identical objects. Once the children accepted that the machine could make copies, he tested what he could get away with. He found they were happy to have their toys copied, but when it came to a sentimental object such as a blanket or a teddy bear, they typically refused to accept the duplicate.

In a similar vein, he also ran a study asking people if they would be willing to wear a cardigan, offering a financial incentive. Most people agreed. But then he asked, ‘Would you still wear it if you knew it belonged to Fred West?’ (a notorious mass murderer). Most people would then refuse, saying it felt disgusting or dirty. These examples show how we imbue inanimate objects with an essence, and that essence matters hugely to humans, determining the nature of how we relate.


We have no problem ‘essentialising’ objects – in either a positive or, as with the cardigan example, a negative way. However, whilst we may form attachments to robots, are they the same type of attachments that we have with our fellow humans? Although Pepper may seem human-like, are there some intrinsic properties that robots will never manage to possess? [iii]

English philosopher John Locke was an important thinker on the topic of essences. He made a distinction between real essences and nominal ones. Nominal essences are ordinary, common-sense concepts of kinds of things, whilst real essences are the deep, unobservable properties that make a thing a member of a kind. The real essence of gold would therefore be hidden in its atomic structure, inaccessible to casual observation. Its nominal essence, on the other hand, is simply a list of the things we usually associate with gold (yellow, heavy, malleable and so on).

Psychologists Douglas Medin and Andrew Ortony of Northwestern University in the US coined the term ‘psychological essentialism’ to reflect our tendency to essentialise categories of things. Since then, researchers have collected a large body of evidence that humans are enthusiastic ‘essentialists’. We tend to think of the world as divided into discrete kinds of things, each of which has a real essence.

Biological species are a prime example. It seems a near-universal human activity to organise the animal kingdom into species. But what gives an animal membership of a species? For example, what is it that makes a certain animal a porcupine? It’s not its quilly appearance, as a porcupine without quills is still a porcupine. We tend to believe tacitly (and often explicitly) that what makes an animal a member of a certain species is not its outward appearance but some deep fact about it – in this case, the porcupine essence – even though we might have no coherent idea of what that essence actually is!

The distinguishing essence of humans is empathy

So, what is the essence that we might apply to ourselves?  Because if we can understand this we can then make active choices about the design of machines.  We might choose to imbue them with this essence to make them easier to deal with or, conversely, we might decide not to stray into that territory to avoid negative societal impacts.

One area that has received a great deal of attention is empathy. Computers are unable to gauge the emotional aspect of a conversation and empathize appropriately. Talk to a computer for any length of time and it soon becomes evident that it is a machine – largely because of this lack of emotional intelligence.

It is of little surprise, therefore, that robotics companies have been racing to find ways to make their machines more empathetic. Hao Zhou at Tsinghua University claims to have developed a chatbot capable of assessing the emotional content of a conversation and responding accordingly. Rana el Kaliouby, CEO and co-founder of Affectiva, an emotion measurement technology company, believes that technologies will become emotion-aware within the next five years, able to read and respond to human emotional states in just the way that humans do. Her view is that “Emotion AI will be ingrained in the technologies we use every day, running in the background, making our tech interactions more personalized, relevant, authentic, and interactive”.

So perhaps we are close to identifying and replicating the essence of humans – and if so, robots may become effectively indistinguishable from us.

The hard problem of empathy

Whilst anything seems possible in the heady environment of Silicon Valley, the reality, according to Sherry Turkle, professor of the social studies of science and technology at MIT, is in fact quite different. She points out that empathy is the capacity that allows us to put ourselves in the place of others, to know what they are feeling. Robots, it seems we need to point out, have no emotions to share, and they cannot put themselves in our place.

But what robots can do, says Turkle, is push our buttons. When they make eye contact and gesture toward us, they encourage us to see them as thinking and caring. They are designed to be cute, to provoke a nurturing response. She suggests that nurturance is the killer app: “We nurture what we love, and we love what we nurture. If a computational object or robot asks for our help, asks us to teach it or tend to it, we attach. That is our human vulnerability. And that is the vulnerability sociable robots exploit with every interaction. The more we interact, the more we help them, the more we think we are in a mutual relationship.”

But, of course, no matter what the appearance of robotic creatures seems to suggest, they don’t understand our emotional lives. We may find ourselves with robots that appear to be empathetic, but they have not known life. As Turkle points out “Simulated thinking may be thinking, but simulated feeling is never feeling, and simulated love is never love.”

If the essence of humans is empathy, it appears that the quest for truly human-like robots is ultimately doomed. A more useful avenue is to explore other, more ‘liminal’ relationships – because, as we have seen, non-human relationships can still have a property about them that is hard to replace. Roboticists would be far better placed exploring what that looks like than attempting the impossible task of replicating human relationships.

By Colin Strong

[i] This is drawn from an interview with Bruce Hood in ‘Big Ideas in Social Science’ by David Edmonds and Nigel Warburton

[ii] This is adapted from

[iii] This is adapted from