Mind hacking

We are in greater danger of a technological invasion of our minds than we are from a foreign army taking over our country, according to Yuval Noah Harari, speaking recently at the World Economic Forum in Davos, Switzerland.  The author of Sapiens: A Brief History of Humankind went on to say:

“There is a lot of talk of hacking computers, smartphones, emails, bank accounts, but the really big thing is hacking human beings. If you have enough data about me and enough computer power and biological knowledge, you can hack my body, my brain, my life.  You can reach a point where you know me better than I know myself.”

Of course, hacking is not a new concept: we have long hacked down trees, for example.  But since the advent of digital technology, the term has become synonymous with the way someone uses technical know-how to break into computer systems and access information that would otherwise be unavailable to them.  Using the term hacking in this context therefore implies not only that people are typically doing this for nefarious purposes but, importantly, that the human mind is something akin to a computer system that can be broken into.

Harari is not alone in his dire predictions about the vulnerability of the human mind to what is more commonly called ‘disinformation’ or ‘fake news’.  The World Health Organisation has coined the term ‘infodemic’, defined as “a tsunami of information—some accurate, some not—that spreads alongside an epidemic”.  All of the major tech companies have faced calls for more active management of the problem, not least Facebook, whose user profiling data was at the centre of the Cambridge Analytica scandal, in which personality profiling was used to serve personalised advertising.  This technique was claimed to have been used to gain influence in US presidential elections and the UK’s Brexit vote.

In a recent paper, Sander van der Linden, a leading psychologist in this area, argues that the rapid spread of online misinformation is a growing threat to democracy, with serious consequences for a variety of societal issues ranging from climate change and vaccinations to international relations.

While the implications of disinformation have been well documented, the question that arises is: if we can be ‘hacked’, what does this tell us about humans?  Quite a lot, according to Harari, who suggests this has huge implications for how we see ourselves and means we should debunk the notion of ‘free will’, the idea that humans make choices of their own volition and agency.  His position is that this never was scientific reality, but a myth inherited from Christian theology. He suggests that Christians developed the idea of “free will” to explain why God should punish or reward us: if choices are not freely made then there would be no basis for this.

Can we really be confident that humans are at a point where we no longer have free will?  And are we really at the point where our minds can be ‘hacked’ in this way?  There is a lot to unpick here: the claims need philosophical scrutiny, but we also need to examine whether there is evidence to support them.  The sight of those who broke into the US Capitol building in January 2021 taking pictures of themselves to post on social media, alongside Trump’s incitements to violence on social media, can make it appear as if technology is manipulating behaviour.  But is it really that simple?

The philosophical limits to mind hacking

Before we breezily proclaim that minds can be hacked, it is worth noting that being able to understand the minds of others has been one of the biggest challenges we face as humans.  In a sense, we can never really be absolutely sure we know what another person feels or thinks. John Locke posed a very simple question to illustrate this point:

“How do I know if you see red the same way that I see red? What if you saw all red things the way I see green, but just called those items red?”

This simple thought experiment helps us to see that it is impossible to know if the experiences of one person are wholly shared by another. Of course, we cannot deny that understanding the thoughts and feelings of another is possible: indeed, it is highly valued in many fields such as medicine and education. In medicine, empathy for patients’ experiences (symptoms, feelings) means that doctors are able to diagnose and treat patients not only more effectively but with greater compassion. Empathy means that teachers can understand the problems and needs of each student.  Indeed, a lack of empathy for others can lead a person to be considered an amoral sociopath.

The broader point is that the boundaries of empathetic understanding are rarely clear.  On the one hand, our unique experiences, thoughts and feelings are particular to us individually, and it can be insulting to be told by another that they know how we feel.  On the other, it is clear that for us to agree on what red looks like we must have some degree of empathy, some ability to understand each other’s thoughts and feelings.

Nevertheless, the way in which we can claim to ‘know another’ is not straightforward.  There is a great deal of contested ground between, on the one hand, being able to ‘feel the pain’ of another and, on the other, being able to engage effectively through a shared ability to use concepts and respond in similar ways.  If we did not have this then, as Ludwig Wittgenstein points out, language and communication would simply not be possible.

So we can see that philosophically there are reasons to be a little cautious with the notion of mind hacking: if we are not even sure that we see red in the same way, then more ambitious claims to mind reading should surely be treated with scepticism.  But philosophical considerations to one side, is there evidence to support the notion of mind hacking?

The technical limits to mind hacking

In 2012 an article by writer Charles Duhigg appeared in the New York Times outlining the way a team at US retailer Target built a model intended to predict which of their shoppers were pregnant. An anecdote was used to illustrate the efficacy of their approach:

About a year after Pole created his pregnancy-prediction model, a man walked into a Target outside Minneapolis and demanded to see the manager. He was clutching coupons that had been sent to his daughter, and he was angry, according to an employee who participated in the conversation.

“My daughter got this in the mail!” he said. “She’s still in high school, and you’re sending her coupons for baby clothes and cribs? Are you trying to encourage her to get pregnant?”

The manager didn’t have any idea what the man was talking about. He looked at the mailer. Sure enough, it was addressed to the man’s daughter and contained advertisements for maternity clothing, nursery furniture and pictures of smiling infants. The manager apologized and then called a few days later to apologize again.

On the phone, though, the father was somewhat abashed. “I had a talk with my daughter,” he said. “It turns out there has been some activities in my house I haven’t been completely aware of. She’s due in August. I owe you an apology.”

The story subsequently gained a huge amount of exposure in the press and lent a lot of currency to the idea that data analytics can reveal things about us that, as is clear from the story, those close to us, or even we ourselves, may not be aware of.  However, Colin Fraser, a data scientist at Facebook, has unpicked this story, showing that it is not as revealing as it first appears.

First, Fraser suggests that there is no way to confirm that Target’s algorithm actually predicted the girl was pregnant.  Although the girl received a coupon book including maternity items, Target likely sent out many coupon books to many people. So if Target had simply sent out maternity-related coupon books at random, this very same scenario could still have taken place: some of the randomly assigned coupon books would certainly reach pregnant women by chance, some of those pregnant women might have had fathers who didn’t know that they were pregnant, and one of those fathers might have gone to a store to complain.

But even if Target’s algorithm did on this occasion predict that this girl was pregnant, the anecdote tells us nothing about the effectiveness of the algorithm at predicting pregnancy.  It merely shows that the targeting worked at least once.  We know nothing about how many pregnant Target customers didn’t get these coupons: information that we would need to assess the accuracy, or otherwise, of the algorithm.
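Fraser’s base-rate point can be made concrete with a small simulation.  The sketch below uses purely illustrative numbers (the population size, pregnancy rate and mailing rate are assumptions, not Target’s figures); it shows that a completely random mailing still produces plenty of ‘angry father’ anecdotes, while the precision and recall of such a ‘model’ remain poor:

```python
import random

# Illustrative assumptions only: none of these figures come from Target.
random.seed(42)
population = 100_000       # customers on the mailing list
pregnancy_rate = 0.04      # assumed share of customers who are pregnant
mailing_rate = 0.10        # share sent a maternity coupon book, entirely at random

pregnant = [random.random() < pregnancy_rate for _ in range(population)]
mailed = [random.random() < mailing_rate for _ in range(population)]

hits = sum(p and m for p, m in zip(pregnant, mailed))              # pregnant and mailed
misses = sum(p and not m for p, m in zip(pregnant, mailed))        # pregnant, not mailed
false_alarms = sum(m and not p for p, m in zip(pregnant, mailed))  # mailed, not pregnant

precision = hits / (hits + false_alarms)  # how often a mailed customer is actually pregnant
recall = hits / (hits + misses)           # how many pregnant customers the mailing reached

print(f"Pregnant customers mailed purely by chance: {hits}")
print(f"Precision: {precision:.1%}, recall: {recall:.1%}")
```

In other words, a single confirmed hit is compatible with targeting that is barely better than chance; only the full breakdown of hits, misses and false alarms would tell us how good the prediction really was.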

The error of logic in this story is to go from the specific to the general: simply because one person was correctly identified does not mean this was the case more broadly.  The way in which evidence was weighed feels designed to support a preconceived assertion.  Nor, as Fraser points out, does it inspire confidence that the anecdote is attributed only to “an employee who participated in the conversation”.

Fast forward to 2018 and the UK’s Guardian newspaper broke the story that voter-profiling company Cambridge Analytica had collected data on over 50 million Facebook users. Cambridge Analytica claimed that it had gathered this data for “psychographic” profiling tools, which meant it was able to customise political ads to users’ personality traits. Whistle-blower Christopher Wylie was quoted as saying: “We exploited Facebook to harvest millions of people’s profiles and built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”  The ensuing uproar means that Cambridge Analytica is now a household name.

However, just as with the Target example, evidence is elusive that the methods used to target political advertising had anything other than a negligible impact on voting behaviour.  Indeed, the UK’s Information Commissioner’s Office concluded its investigation by saying Cambridge Analytica wasn’t doing anything particularly unique: the Information Commissioner, Elizabeth Denham, told the UK Parliament that “on examination, the methods that SCL (a company that is corporately interlinked with Cambridge Analytica) was using were, in the main, well-recognised processes using commonly available technology”.

As The New Statesman’s Laurie Clark put it, “this assessment jarred with reporting at the time that had imbued CA with Derren Brown-like abilities to tinker with perceptions and sway credulous”.  Clark goes on to say that while there is some evidence that psychographic marketing can work in the context of consumer goods, research by Brendan Nyhan suggests it is not currently effective for political advertising.  Further, personality profiling aside, research shows that political micro-targeting is simply not very effective at persuasion.

Indeed, Timothy Hwang, who formerly worked at Google on policy, suggests that internet advertising faces a “Subprime Attention Crisis”: it is overvalued due to the opaqueness of the market, and very few of those involved are willing to point out that it has little impact on behaviour.

The point here is not to evaluate the effectiveness or otherwise of data analytics and its ability to micro-target and change behaviour, but rather to call attention to the way in which the supposed technological invasion of our minds is far more contestable than it might first appear.

Artificial Intelligence is the technology used for ‘programmatic advertising’, which places personalised marketing communications into the websites we are viewing; these are dynamically customised to reflect the wide range of data held about us individually.  Tech advocate and critic Jaron Lanier suggests we fall into logical traps in the way we evaluate the effectiveness of AI.  AI results can look impressive, but it is hard to know how well they perform versus other, much more straightforward methods.  We could, for example, simply show people the best-selling books in the category in which they have recently made a purchase. Would this perform just as well?  The answer is that we don’t know, because there is often no baseline to compare against.  The AI may be better, but much of the time we simply do not know, which makes it hard to evaluate just how good AI is versus other methods.
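A minimal sketch of what such a baseline comparison might look like is given below.  All of the data and conversion rates are invented for illustration; the point is simply that the personalised system’s performance only means something when set against a simple ‘best-sellers’ baseline in the same test:

```python
import random

# Illustrative sketch only: synthetic conversion rates, not real campaign data.
random.seed(0)

def conversions(rate: float, n_users: int) -> int:
    """Simulate how many of n_users respond, given an assumed underlying rate."""
    return sum(random.random() < rate for _ in range(n_users))

n_users = 10_000
baseline_rate = 0.030      # assumed rate for "show the category best-sellers"
personalised_rate = 0.033  # assumed rate for the AI-personalised approach

baseline = conversions(baseline_rate, n_users)
personalised = conversions(personalised_rate, n_users)

print(f"Best-sellers baseline: {baseline / n_users:.2%}")
print(f"Personalised AI:       {personalised / n_users:.2%}")
# On its own, the personalised figure might look impressive; only the side-by-side
# comparison (plus a significance test) tells us whether the AI adds anything.
```

Without the baseline arm, any apparently impressive result from the personalised system is uninterpretable, which is precisely Lanier’s point.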

So it seems that the evidence that our minds can be ‘hacked’ by technology is not as robust as it first appears.  How, then, can we better understand the way in which the ‘infodemic of fake news’ is apparently changing our behaviours?  To do this we need to look at historical examples which predate the internet.

Disinformation is nothing new

There is a long list of ways in which minds appear to have been hacked that predate the technology so often held responsible for misleading us.  Going back 2,000 years, the Roman Republic was sliding towards civil war between Octavian, the adopted son of Julius Caesar, and Mark Antony, one of Caesar’s trusted commanders.

To get public backing, Octavian claimed Antony was unfit to hold office, not only because he was always drunk but because his affair with Cleopatra showed he did not respect Roman values of faithfulness and respect.  Octavian promoted his messages using poetry and slogans printed on coins, and eventually became the first Emperor of Rome, ruling for over 40 years.

In a very different example, news media in several West African countries have periodically reported men and women being beaten, sometimes resulting in death, after being accused of causing penises, breasts, and vaginas to shrink or disappear, a phenomenon often referred to as Koro.  Vivian Dzokoto and Glenn Adams, writing in the journal Culture, Medicine and Psychiatry, give one example of a 17-year-old male who “claimed that he had gone to fetch water for his father and was returning when [the perpetrator] came behind him, touched him and immediately he felt his penis shrink until it was no longer visible.”

In a more recent example, from 1999, hundreds of people contacted Belgium’s National Poison Centre when they became ill after drinking Coca-Cola.  However, laboratory analysis found the complaints were unsubstantiated, although some bottles did contain very low, non-toxic but odorous amounts of hydrogen sulphide.

So while the form and channels of ‘disinformation’ may differ from those we are used to today, we can see how people in communities have always learnt information from others, and what they have heard has influenced their response.  This raises a challenge for the term ‘mind hacking’: it gives us no means to examine why people believe what they hear from other people.  At best it suggests humans are simply not to be trusted, given they are prone to bouts of irrationality or even hysteria.  We receive a vast array of information every day: the question we need to answer is what mechanisms mean that certain sorts of information have a much more dramatic impact than others.

The wider context of our lives

To examine this, we can look at an area which is understandably of a great deal of concern: the anti-vaxx movement.  In a recent article in the UK newspaper The Guardian, journalist Arwa Mahdawi cited a range of surveys which indicate women are less likely than men to say they will seek a COVID vaccination, despite global research suggesting that women are more likely than men to take the pandemic seriously and comply with public-health regulations.

Is this due to the anti-vaxxer movement, which is dominated by women?  Recent research from George Washington University found that members of online communities previously “undecided” on vaccines (pet lovers or yoga enthusiasts, for example) are increasingly linking with anti-vaxxers.  The difficult question, though, is whether we should really ascribe the beliefs of these women to the mind-hacking skills of the anti-vaxxers.  As Mahdawi puts it:

One reason women are disproportionately attracted to alternative medicine is because traditional medicine hasn’t exactly done a brilliant job of earning their trust. Women’s health concerns are often dismissed: one study found women with severe stomach pain had to wait 33% longer to be seen by a doctor than men with the same symptoms. Women’s health problems are also massively under-researched: there is five times more research into erectile dysfunction than premenstrual syndrome, for example, despite the former affecting 19% of men and the latter affecting 90% of women. In the US, medical research trials weren’t required to include women until 1993 because women’s bodies were considered too complex and hormonal.

She goes on to make a similar point about race: fewer than half of Black American adults say they intend to get a coronavirus vaccine, compared to 61% of white people.

Black Americans have been experimented on (one word: Tuskegee) and forcibly sterilized. Black pain hasn’t been taken seriously by the medical establishment because of enduring racist notions that Black people have thicker skin than white people. Minorities are also underrepresented in clinical trials, which can result in technology and treatments that don’t meet their needs. Pulse oximeters, for example, which measure the oxygen levels in your blood and have been increasingly in use due to the pandemic, can give misleading readings in people with dark skin. A new study has found that misleading results happen three times more often for Black people. Probably because the colour of light used in the pulse oximeter can be absorbed by skin pigment. Which would have been something researchers would have caught straight away if they took diversity seriously.

We can see that anti-vaxx sentiment is not simply a problem of Russian disinformation bots: these messages only find a receptive audience because there is fertile ground for them.  If we go on to consider Koro, as Dzokoto and Adams point out, there is a strong link with the ideology and beliefs of the culture concerning fertility, procreation and sexual performance. The outbreak of hysteria regarding Coca-Cola has been attributed to background anxiety at the time over the safety of food products.  In each case, what might at first glance seem to be irrational behaviour in fact has its roots in a wider set of collective beliefs held in the population.

It is all too easy to fall into a false dichotomy between the digital and ‘real life’: it makes it easier to absolve ourselves of responsibility for something whose causes are at least partly of our own making.

For example, some reports on the attempted coup by the mob storming the Capitol building in the United States in January 2021 suggested the rioters were the unwitting victims of social media manipulation.  This prompted many calls for greater controls on social media.  What is less reported, however, is that even after this astonishing display, many politicians still tried to overturn the election results with their own Congressional votes.  Sociologist Zeynep Tufekci writes:

“An overwhelming majority of the GOP representatives in the house spent the day in lock-down and came back and promptly voted to overturn the election. In a future scenario …. It’s absolutely plausible to me that even more Republicans would have joined this blatant attempt to overturn the election and that their base would mostly have been fine with that.”

The deus ex machina event at once crystallised, in a very salient way, the potential threat to normalcy and order, but at the same time drew attention away from so much else in US politics.  As technology researcher Nathan Jurgenson pointed out of the shock many expressed at seeing this happen in America:

“They apparently haven’t paid any attention to Trump rallies before. This incident was hardly different, aside from the special access the participants were eventually granted to a government building.”

So perhaps we can consider that ‘mind hacking’ via social media is not quite so simple as it is often characterised:  again we cannot ignore the wider context in which it operates. 

Addressing disinformation

Much of the diagnosis of the problem of disinformation, and of its solutions, is stuck in the notion of ‘mind hacking’.  Let’s look at one solution that has had a lot of coverage: an online game developed by psychologists Jon Roozenbeek and Sander van der Linden.  In the game people play the role of propaganda producers, the aim being to help them increase their “psychological resistance” to fake news.

The game involves players fuelling anger and fear by appearing to manipulate news and social media: they can deploy Twitter bots and photoshopped evidence, and incite conspiracy theories to attract followers. There are six “badges” to earn in the game, each reflecting a common strategy used by spreaders of fake news: impersonation, conspiracy, polarisation, discrediting sources, trolling and emotionally provocative content.  Sander van der Linden, Director of the Cambridge Social Decision-Making Lab, said:

“We wanted to see if we could pre-emptively debunk, or ‘pre-bunk’, fake news by exposing people to a weak dose of the methods used to create and spread disinformation, so they have a better understanding of how they might be deceived.  This is a version of what psychologists call ‘inoculation theory’, with our game working like a psychological vaccination.”

To gauge the effects of the game, players were asked to rate the reliability of a series of different headlines and tweets before and after gameplay. The study found that the perceived reliability of fake news dropped by an average of 21% after playing the game.
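For clarity, the measure described here is a within-person pre/post comparison, along the lines of the sketch below.  The ratings are invented purely for illustration and are not data from the actual study:

```python
# Illustrative sketch of the pre/post measure described above; the ratings below
# are made-up numbers, not data from the actual study.
players = [
    {"pre": 4.2, "post": 3.1},  # one hypothetical player's mean reliability rating
    {"pre": 3.8, "post": 3.2},  # of fake headlines (e.g. on a 1-7 scale),
    {"pre": 5.0, "post": 4.0},  # before and after playing the game
]

reductions = [(p["pre"] - p["post"]) / p["pre"] for p in players]
average_reduction = sum(reductions) / len(reductions)
print(f"Average reduction in perceived reliability of fake news: {average_reduction:.0%}")
```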

Clearly this approach offers some elements of a solution but nevertheless has challenges, not least how to persuade people to actually play the game in the first place.  The bigger problem, however, is that the game positions disinformation as a manipulation problem rather than as a reflection of the wider social and cultural factors that determine what seems plausible, based on what is known and shared within our cultural reference points.

The issue is not merely one of how we go about processing information: that gets the order of things wrong.  How we critically examine information will inevitably be a function of a much wider set of values, beliefs and attitudes that are shared and shaped by the people we share our lives with.  Without considering these wider points we are endlessly addressing the symptoms rather than the broader causes.

Indeed, tech critic Evgeny Morozov suggests that news organisations are caught up in fuelling a culture of misinformation, led by the need to drive attention, as shown in this example he gives about the Washington Post:

 “…it has recently warned about damaging Russian cyberattacks on a power grid in Vermont (in a report followed in other media outlets, including the Observer). It seems that those attacks didn’t happen and that the Washington Post didn’t even bother to check with the grid operator. Apparently, an economy ruled by online advertising has produced its own theory of truth: truth is whatever produces most eyeballs.”

This is certainly consistent with research from the Harvard Berkman Klein Center, which has challenged the notion that social media is the primary vehicle for disinformation. The researchers analysed allegations of voter mail-in fraud, a huge controversy in the 2020 US presidential election, and found that the issue was part of a systematic campaign amplified by a wide range of traditional media outlets. They suggested that Fox News, a right-wing television network, was more influential in spreading beliefs about voter mail-in fraud than social media, concluding:

“Our findings suggest that this highly effective disinformation campaign, with potentially profound effects for both participation in, and the legitimacy of, the 2020 election, was an elite-driven, mass-media led process. Social media played only a secondary role.”

On being human

To return to the start of this chapter, the narrative that we are unwitting, helpless individuals without free will rather begs the question of who then managed to develop the technology that meant our minds could be hacked.  For if we truly thought that we lacked free will, how on earth would we have the incentive to develop these sorts of tools?  It feels as if there is a sleight of hand here: some people are in a position to have their minds hacked and lack free will, while other people are doing the hacking and must surely have free will to do so.

Indeed, the notion that we can be ‘hacked’ rests on the assumption that our beliefs would otherwise come from a vacuum in which we are untouched by others.  This narrow, individualistic notion of humans is perpetuated by a Cartesian view in which rationality requires the freeing of one’s mind from any kind of external authority.  This perspective suggests that the essence of being human is the ability to separate oneself and operate in an entirely individualised manner, honing the skills of rational living.

But, of course, this is not how we are.  We are socially embedded creatures for whom making sense of the world necessarily requires making sense of each other:  as Mary Midgley points out, our world is social.  We cannot make sense of money, parties, football and so on unless we unpick the ‘social facts’ of the situation.

The mob that stormed the US Capitol building genuinely believed the ‘social facts’ that were laid before them by a range of news media, friends, colleagues, politicians and so on.  There is a huge amount of information in circulation that makes it easy to believe, plausibly, that democracy is at risk.  To pin this on ‘mind hacking’ is to ignore the wider reality of being human: the embedded, connected way in which we absorb and share information over time.


The popular narrative that technology is revealing us to be simple ‘hunter-gatherers’ whose minds can be hacked by smart devices is looking a little thin.  Instead, by critiquing technology in this way, we can see that humans are in fact intelligent, socially embedded creatures.  Of course, we can get things wrong: intelligence does not equate to being right all the time.  But addressing the systemic problems we face is not helped by downgrading human capabilities.

