Humans needed to understand humans

Can a machine make better sense of the world than our fellow humans? If we pay heed to Google co-founder Sergey Brin then it seems likely, for he said: “you should presume that someday, we will be able to make machines that can reason, think and do things better than we can.”

While this may make sense in many contexts, does it hold up in the way we understand human behaviour? Are we really destined to be mere bystanders in commentary about ourselves as AI gets ever better? This is no idle speculation: increasingly, AI-informed tools are being used to make sense of our data trails, or even to ask the questions.

To think this through, let us first take a slight detour via the artist Joseph Kosuth. His famous ‘One and Three’ installations involved assembling an object, a photograph of that object, and a dictionary definition of the object. His best-known work, ‘One and Three Chairs’, asks what actually constitutes a chair in our minds: is it the solid object we see and use, or is it the word “chair” that we use to identify it and communicate it to others?

The reason for this detour is perhaps clarified by another commentary on chairs, this time from John Dewey, the American philosopher, psychologist and educational reformer. He talked about the different ways in which a human and a dog engage with a chair:

“A chair is a different object to a being to whom it consciously suggests an opportunity for sitting down, repose, or sociable converse, from what it is to one to whom it represents itself merely as a thing to be smelled, or gnawed, or jumped over…. It is only by courtesy, indeed, that we can say that an unthinking animal experiences an object at all – so largely is anything that presents itself to us as an object made up of qualities it possesses as a sign of other things.”

What both Dewey and Kosuth are suggesting is that we are not necessarily aware of the meanings we bring to objects, and to the world in general. We may assume it is obvious what we do with a chair, but that is only because we live in a world of shared meanings. Dewey goes on to say:

“Just what is left of an object stripped of all such qualities of meaning, we cannot well say; but we can be sure that the object is then a very different sort of thing from the objects that we perceive.”

We necessarily take for granted the huge range of knowledge, understanding and meaning that we bring to any situation.  It is only when something significant changes that we can see the way in which we navigate a world using shared meanings.  When we visit a foreign country, we can then see the way in which we have taken for granted the unspoken understandings of our culture as we stumble around with one faux pas after another. 

Making sense of species human

What if we turn from chairs to ourselves: how do we make sense of ‘species human’? The tools we use have often become metaphors for the way we understand ourselves. Louise Barrett points out a range of historical examples in her book Beyond the Brain. Socrates likened the mind to a wax tablet; John Locke considered it a blank slate; Sigmund Freud thought it a hydraulic system. She points out that “The mind/brain has also been compared to an abbey, cathedral, aviary, theatre, and warehouse, as well as a filing cabinet, clockwork mechanism, camera obscura, and phonograph, and also a railway network and telephone exchange.”

Digital technology, and AI in particular, is arguably the latest wave of this, an inevitable next phase on our journey. This would be of little consequence if it were merely a helpful device for thinking about the mind, but the implications are broader. Nicholas Carr makes a case for the way technology has a much more holistic influence on humans’ view of the world and, of course, of ourselves. He sets out how the printing press moved us from a time of collective learning (and therefore thinking), in which memory was at the heart of knowledge, to our present era of individualism, with information processing as a key mechanism of learning. The invention of the telescope took us from a time when we understood our world to be governed by natural and spiritual forces to one understood through physics and mathematics. Where once different towns in the same country would keep different times, the invention of clocks unified our experience of time and with that ushered in a much greater sense of nationhood.

Some argue that digital technology operates in a way that goes even further, radically reshaping our sense of what it means to be human. Brett Frischmann and Evan Selinger argue that its scale (the number of people affected), scope (the range and types of message), influence (the power to persuade) and architectural extension (the degree to which the technology fits within and bridges different environments) mean that digital is different.

This reflects the view of Neil Postman, who argued that electronic communications were moving us from a tool-using ‘technocracy’ to a ‘Technopoly’, in which humans are effectively in service to tools. He writes:

‘…tools ought still to be their servants, not their masters. They would allow their tools to be presumptuous, aggressive, audacious, impudent servants, but that tools should rise above their servile status was an appalling thought’.

The role of AI

It feels important, therefore, that we start looking at the role artificial intelligence plays in the way we understand ourselves. As Margaret Boden points out, AI has two aims. One is technological: using computers to get useful things done, often by methods that can be quite unlike those our minds use. The other is more pertinent to this discussion: using AI concepts and models to inform questions about human minds. There has long been a close relationship between computer scientists and cognitive psychologists and neuroscientists. So already we have moved from technology as a handy metaphor to help us think, to one that is assumed to have direct relevance to our conception of ourselves.

If the mind can be described in terms of a computer, then eventually it must be possible to build a computer that is like the mind. The difficulty with this comes from failing to recognise the degree to which our shared meanings are critical to so much of our thinking.

With this in mind, we can start to see the challenge of AI as a means of making sense of ourselves: whether as a way to make effective decisions about our lives, a tool to interrogate data and reveal meaningful insights about us, or a metaphor we use to navigate who we are.

The problem is that to do this effectively AI would need to ‘understand’ the way we live, and this is not how AI operates. AI works well in a very formally defined problem domain, what we might call a ‘closed system’, where the situation is predictable, measurable and, importantly, where no context is needed. No knowledge is required from outside the very specific domain in question.

This is fine if we are operating in a ‘technocracy’: we use computers to get useful things done, often by methods that can be quite unlike those our minds use. But we start to have a challenge when using AI concepts and models to inform questions about human minds. The danger here is that we fall into ‘technopoly’.

AI researchers tend to take an engineering-led, contextless view of problems, which is probably perfectly fine if we assume that knowledge and problem solving are purely individualistic, mental activities. But as Alison Adam points out:

“Even ostensibly physical problems, such as the classic example used to teach AI, where a monkey must retrieve a bunch of bananas hanging from a hook on the ceiling by placing a chair correctly and then climbing onto it, are represented in AI in an abstract, mental way. This is despite the fact that this simple task requires perception, common sense, grasping an object and movement. The monkey will know immediately as it moves through the task which bits of the environment change, and what stays the same. AI would have to recast it in a formal way, with rules and logic etc., to solve it, which makes it highly complex despite it being trivial for our monkey”.

This is what is known as the frame problem, a particular issue for AI. When presented with an unfamiliar subject, a machine does not ‘know’ what information about that subject is important and what is irrelevant. A certain amount of knowledge about the subject is necessary to make that determination, and without that information in its database, a computer cannot make the distinction.
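To make this concrete, here is a minimal sketch, in the spirit of Adam’s monkey-and-bananas example, of what the task looks like once recast in the formal, rule-based way AI requires. The fact names, actions and simplifications are my own invention for illustration; the point is how much of what a monkey perceives at a glance has to be enumerated by hand, and that the program has no sense of what any of it means.

# A STRIPS-style sketch of the monkey-and-bananas task (illustrative only).
# Every fact and every action must be spelled out explicitly; anything not
# listed here simply does not exist for the program.
from collections import deque

initial = frozenset({"monkey@door", "chair@window", "bananas@centre", "monkey_on_floor"})
goal = "has_bananas"

# Each action: (preconditions, facts it adds, facts it removes).
actions = {
    "go_to_window":      ({"monkey@door"}, {"monkey@window"}, {"monkey@door"}),
    "push_chair_centre": ({"monkey@window", "chair@window"},
                          {"monkey@centre", "chair@centre"},
                          {"monkey@window", "chair@window"}),
    "climb_chair":       ({"monkey@centre", "chair@centre", "monkey_on_floor"},
                          {"monkey_on_chair"}, {"monkey_on_floor"}),
    "grab_bananas":      ({"monkey_on_chair", "bananas@centre"},
                          {"has_bananas"}, {"bananas@centre"}),
}

def plan(state):
    """Breadth-first search over world states until the goal fact appears."""
    frontier, seen = deque([(state, [])]), {state}
    while frontier:
        current, path = frontier.popleft()
        if goal in current:
            return path
        for name, (pre, add, remove) in actions.items():
            if pre <= current:
                nxt = frozenset((current - remove) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [name]))

print(plan(initial))
# ['go_to_window', 'push_chair_centre', 'climb_chair', 'grab_bananas']

The printed plan looks sensible, but only because every relevant change (and, implicitly, everything that stays the same) has been hand-coded into the action definitions. Add a door to lock or a wheel to the chair and the program knows nothing about it until someone writes more rules.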

Of course, if our problem sits purely in the physical world then we can see how this makes sense (although even here it is not without controversy, as quantum physicists are finding). But so much of our human world is about ‘context’: to borrow a popular phrase, shared meaning is a feature, not a bug. The philosopher Mary Midgley refers to this in her book Science and Poetry:

“Social institutions such as money, government and football… are forms of practice shaped and engaged in by conscious, active subjects through acts performed in pursuit of their aims and intentions. They can therefore only be understood in terms framed to express those subjects’ point of view.”

A nice illustration of this is offered by Terry Winograd on the apparently simple task of language comprehension, which requires a surprisingly large amount of knowledge. Consider the following two sentences:

The police refused to give the students a permit to demonstrate because they feared violence.

The police refused to give the students a permit to demonstrate because they advocated revolution.

Untangling what the word “they” refers to in each of the two sentences comes easily to a human, despite their identical grammatical structure. We can easily make sense of this because we know well whether it is the police or the students who are more likely to fear violence or to advocate revolution. The context in which we operate is everything.
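A small sketch (my own, not Winograd’s) shows why this is so hard for a purely formal system: a naive rule such as “resolve a pronoun to the nearest preceding noun” is forced to give the same answer for both sentences, because their surface structure is identical.

# Winograd's two sentences differ only in one verb; a purely syntactic
# heuristic (invented here for illustration) cannot tell them apart.
sentences = {
    "feared":    "The police refused to give the students a permit to "
                 "demonstrate because they feared violence.",
    "advocated": "The police refused to give the students a permit to "
                 "demonstrate because they advocated revolution.",
}

def nearest_antecedent(sentence):
    """Resolve 'they' to whichever candidate noun appears closest before it."""
    before_they = sentence.split(" they ")[0]
    return "the students" if before_they.rfind("students") > before_they.rfind("police") else "the police"

for verb, sentence in sentences.items():
    print(verb, "->", nearest_antecedent(sentence))
# feared -> the students
# advocated -> the students

The heuristic answers “the students” both times. A human resolves “they” to the police in the first sentence and the students in the second, drawing on background knowledge about who is likely to fear violence and who is likely to advocate revolution; no amount of grammar supplies that knowledge.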

A very tangible example

A paper published by Michal Kosinski and Yilun Wang reported that a machine-learning system they had designed was able to differentiate between photos of gay and straight people with an apparently high degree of accuracy. Over thirty-five thousand photographs from dating websites were used, along with what was described as facial-recognition software.

When given two pictures – one of a gay person, the other straight – the algorithm was able to successfully distinguish the two in 81% of cases involving photos of men and 74% of images of women. Human judges, by contrast, were only able to correctly classify the straight and gay people in 61% and 54% of cases, respectively. Following this, Kosinski went on to make bold claims: that such AI will soon be able to measure the intelligence, political orientation, and criminal inclinations of people from their facial images alone.

Does this mean that AI can tell us things about ourselves that we ourselves struggle with? As we asked earlier, are we so hopelessly flawed that we will need to rely on AI to give us a clearer sense of ourselves? A vocal critic was found in the shape of Princeton professor Alexander Todorov, a leading authority on faces and psychology. Along with his collaborators, he argued that Kosinski’s approach was flawed: the algorithms could have been identifying patterns in cosmetics usage, facial hair, eyewear and even the angle at which the camera was held. Self-posted photos on dating websites, Todorov points out, contain a range of information not relating to the physiology of the face itself.

Todorov, along with researchers from Google, tested this using a survey of 8,000 Americans. They asked a wide range of questions, including “Do you wear eyeshadow?”, “Do you wear glasses?”, and “Do you have a beard?”, as well as questions about gender and sexual orientation. The study showed that lesbians use eyeshadow less than straight women do, gay men and women wear glasses more, and young opposite-sex-attracted males are much more likely to have prominent facial hair than their gay peers.

Todorov and his colleagues convincingly showed that these obvious differences between lesbian or gay and straight faces in selfies relate to differences in culture, not in facial structure. A simple calculation based on a small number of questions about appearance did almost as well at guessing orientation as the facial-recognition AI.
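To see what such a “simple calculation” might look like, here is a sketch of the idea: score each person from a handful of yes/no survey answers and, given a pair, guess that the higher-scoring person is gay, mirroring the pairwise task in Kosinski’s study. The weights and example records are invented for illustration only; just the direction of each feature echoes the survey findings described above.

# Illustrative weights only: positive pushes the score towards 'gay',
# negative towards 'straight'. The numbers are made up; the directions
# follow the survey findings reported by Todorov and colleagues.
WEIGHTS = {
    "wears_glasses": 0.6,     # gay men and women reported wearing glasses more
    "wears_eyeshadow": -0.8,  # lesbians reported using eyeshadow less than straight women
    "has_beard": -0.7,        # prominent facial hair skewed towards straight men
}

def score(answers):
    """Weighted sum of yes/no answers (1 = yes, 0 = no)."""
    return sum(weight * answers.get(question, 0) for question, weight in WEIGHTS.items())

def pick_from_pair(person_a, person_b):
    """Mimic the paper's pairwise task: guess which of the two people is gay."""
    return "A" if score(person_a) > score(person_b) else "B"

person_a = {"wears_glasses": 1, "has_beard": 0}   # invented survey record
person_b = {"wears_glasses": 0, "has_beard": 1}   # invented survey record
print(pick_from_pair(person_a, person_b))          # -> A

The point of the comparison is not that such a score is a good idea, but that if a few self-reported grooming choices get close to the algorithm’s accuracy, the algorithm is most plausibly reading culture and presentation, not facial structure.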

In conclusion

I take this as a cautionary tale: it is easy to assume a machine ‘knows best’ in comparison with the apparently poor-grade material between our ears. But as we have seen in the sexual orientation study, the AI was picking up differences without being able to identify why. It failed to grasp the shared meanings that our culture attaches to sexual orientation. A straightforward process of asking people a few simple questions revealed what the machine could never understand: that lifestyle and culture are how we communicate a huge amount about ourselves, including sexual orientation.

To understand humans, we necessarily need to be part of the rich, complex, multi-faceted layers of meaning that make up human life. The research institute AI Now highlighted the way in which AI is used to detect emotion, suggesting that this approach:

 “…raises troubling ethical questions about locating the arbiter of someone’s “real” character and emotions outside of the individual, and the potential abuse of power that can be justified based on these faulty claims.”

There is much that AI is brilliant for. Proving theorems, playing chess, or even diagnosing medical scans typically does not require knowledge outside the very specific domain in question. But for human behaviour we need to remember Kosuth’s ‘One and Three Chairs’. Because we are humans (studying humans), we can so easily forget how much of our embedded lives we simply take for granted.

Colin Strong is Head of Behavioural Science at Ipsos. In his role he works with a wide range of brands and public sector organisations to combine market research with behavioural science, creating new and innovative solutions to long standing strategy and policy challenges. His career has been spent largely in market research, with much of it at GfK where he was MD of the UK Technology division. As such he has a focus on consulting on the way in which technology disrupts markets, creating new challenges and opportunities but also how customer data can be used to develop new techniques for consumer insights. Colin is author of Humanizing Big Data which sets out a new agenda for the way in which more value can be leveraged from the rapidly emerging data economy. Colin is a regular speaker and writer on the philosophy and practice of consumer insight.
