The Problem with Expertise

Expertise has something of a problem at the moment.  Ever since UK politician Michael Gove famously suggested that ‘the country has had quite enough of experts’, there has been a period of self-reflection among experts.  The Bank of England’s chief economist has suggested his profession is in crisis because it failed to predict the 2008 financial crash and misjudged the impact of the Brexit vote on the economy.[i]  And the general public appears to have its concerns, with a recent survey by Edelman suggesting that trust in our institutions (arguably staffed by experts) is at an all-time low.[ii]

Perhaps it is no bad thing that we have a healthy cynicism about experts.  There is surely a fine line between deferring to an expert and obedient compliance to authority.  But do experts really deserve the low esteem in which they now often appear to be held? And are we in danger of ‘solutionism’, thinking we can magic up easy answers to long-standing complex issues?  Is Jared Kushner, a novice in global geopolitics, really better placed to find peace in the Middle East than all the policy experts before him?[iii]

The case against experts

To try and understand this, we first need to look at the case against experts.  At one level, there is plenty of evidence that expertise does have a problem.  Behavioural science has looked closely at expert decision making and often found it wanting.  So, for example, 85% of radiologists managed to miss the picture of a man in a gorilla suit shaking his fist when looking through a series of slides of the kind they typically examine for evidence of cancerous nodules.[iv] This is known as ‘inattentional blindness’; we get so used to looking for certain things that we fail to spot an anomaly which is outside our usual frame of reference.

In another famous study, Shai Danziger of Ben-Gurion University of the Negev and his colleagues [v] followed eight Israeli judges for ten months and recorded their rulings on over 1,000 applications made by prisoners to parole boards. The defendants were asking either to be allowed out on parole or for the conditions of their imprisonment to be changed. It transpired that at the start of the day, the judges granted around two-thirds of the applications before them. But as the day went by, that number fell significantly, eventually reaching zero. The startling finding was that leniency returned after each of two daily breaks, during which the judges retired for refreshments. The approval rate then went back up close to its original value, before falling again as the day went on.  So it would appear the judges, in this instance at least, were not as objective and thorough as one might hope.

There is a great deal of further evidence suggesting that experts are far from infallible. Indeed, a study by Philip Tetlock found that experts had only a relatively minor advantage over lay people in predicting political and economic outcomes.  Despite identifying a minority of ‘superforecasters’, he concluded that the average ‘expert’ was in fact “roughly as accurate as a dart-throwing chimpanzee”.[vi]

Is technology making expertise redundant?

Added to this, technology appears to have much to offer in its ability to rapidly process a great deal of information and reach objective conclusions.  Richard and Daniel Susskind [vii] contend that the dominance of white-collar professionals, once the bastions of expertise, is being threatened by technology which is routinizing, decomposing and disintermediating work such that it requires much less expert analysis.  We may need experts to simplify tasks in the first place, but we then have a much-reduced need for those same experts once the resulting solutions are deployed at scale.

Experts also no longer play a gatekeeper role with information.  The much wider availability of data and access to computing power mean that it is harder for experts to maintain a monopoly on insight. A good example is Bain and Co’s ‘million-dollar slide’, a single image once considered so insightful for a client that it was estimated to be worth about $1m in consulting fees.[viii]  There is now no end of data available for any of us to use from a wide variety of sources, so it is not difficult to quickly make oneself well informed.  For example, the Sloan Digital Sky Survey has been making maps of the skies available since 2008 and has now catalogued some 230 million celestial objects.  The amateur astronomer has never been so well informed.

And technology now means that solutions to long-standing problems can be crowd-sourced, rather than relying on a small cadre of experts.  This has created a market for contests open to all-comers.  For example, a contest run by the Oil Spill Recovery Institute (with a prize of $20,000) drew an innovative solution from a chemist who knew little about oil but a great deal about cement, an elegant solution that had eluded oil industry experts.[ix]

And of course, we are seeing the rise of machine learning and artificial intelligence.  IBM’s Watson, for example, startled the world when it beat the best human competitors on the quiz show Jeopardy!.  And there is growing anticipation of the widespread adoption of such systems, if not to replace experts, then to significantly augment their capabilities.  In India, for example, IBM has partnered with Manipal Hospitals to provide diagnosis and treatment to cancer patients. Watson for Oncology is used across 16 of the hospital group’s facilities and academic centres, where more than 200,000 patients are treated each year.[x]

Given these sorts of developments, there is no end of excited discussion of the death of expertise.  Even if we still respect the skill of the expert, perhaps it is in danger of rapidly being made redundant.

But is it that simple?

And yet, for many working in roles where they might be considered experts, these criticisms do not always seem to tell the whole story of the day-to-day reality of their jobs.

The part of the debate that is currently missing is what being an expert actually entails.  It is easy to assume that we know what being an expert is about, yet fail to relate this to the daily experience of experts’ working lives.  In some ways this is hard to do, not least because much of our practice becomes ingrained and tacit; we do not often have the time, or indeed the inclination, to reflect on the nature of what we do.

But if we do take the trouble to step back and think about what the term means, we can start to see that being an expert is not only about efficiently processing information and making accurate predictions.  Donald Schon wrote about practitioner skills in his classic book ‘The Reflective Practitioner’,[xi] two points from which are relevant here.  First, experts are engaged not just in problem-solving but in problem-setting.  Much of the time expert practitioners are presented with a fairly vague challenge.  ‘What can I do about my declining sales?’ is a classic example.  A significant part of the work which ensues is identifying the question that really needs to be addressed.  Where are sales declining, what are competitors doing, are there issues with the product, and so on?  Much thought, consideration and discussion then follow to establish the class of problem and how it can be addressed.

Second, practitioner experts will often deal with unique situations.   The question may be obvious: sales are declining because a new competitor is offering a better product.  But it is not always apparent why consumers find it more appealing, particularly when there are no similar products that might help us to understand it.  So we are thrown back on our judgment to determine which principles should be applied.  Maybe we need to explore branding and advertising, maybe pricing and promotions, possibly the product formulation.  All of these and more could be responsible, but we do not have the luxury of being able to develop empirically sound conclusions.

There is a third consideration: if we require experts always to be right, then we are creating a false standard.  The most we can realistically expect is for them to be right most of the time.  Indeed, our brains seem to be wired for ‘induction’, whereby we generalise from a small number of cases to draw broad conclusions.  By doing this we can operate quickly and efficiently and know that we are ‘probably’ right.[xii]  The alternative is a painstaking review of all the data, and even then we cannot be sure we have everything covered.  But just as this is our strength, it is also a weakness: if we are only probably right, there is always the possibility that we are wrong.  It is hard to see a viable alternative.

Conclusions

There is a danger that we are seeing expertise in quite a limited way.  We increasingly expect our experts to be very thorough and exact in their predictions.  But surely this is a limited and reductionist view of what expertise involves.

Indeed, we are in danger of applying to experts the standards that we have for technology.  We do not ask technology to set the parameters for an issue.  We do not ask technology to deal with instances which are unique and unusual.  The criticisms made of experts tend to target precisely those areas where technology excels, such as thoroughness and consistency.

There has, however, been a tendency among many professions to expect unquestioning deference to their authority.   So it can only be a good thing that we are now more likely to question experts, and to see ourselves as having the potential for expertise on a topic.  Experts, in turn, have to offer humility.  There is nothing to be gained from appearing infallible and therefore unchallengeable.

But we also need to respect the work of experts, which is often complex, nuanced and tacit. It can be difficult to identify and articulate these capabilities, which are very human in nature.  And surely we demand something different of our experts from what we expect of technology.  Each has its role.  We need to recognise, articulate and defend their relative skills, not suggest that one has primacy over the other.

By Colin Strong

[i] See https://www.theguardian.com/business/2017/jan/05/chief-economist-of-bank-of-england-admits-errors

[ii] See http://www.edelman.com/trust2017/

[iii] See https://www.nytimes.com/2017/06/19/us/politics/jared-kushner-mideast-diplomacy.html

[iv] Drew, Trafton,  L.-H. Võ, Melissa, Wolfe, Jeremy M. (2013) The Invisible Gorilla Strikes Again

Sustained Inattentional Blindness in Expert Observers. Psychological Science. Volume: 24 issue: 9, page(s): 1848-1853

[v] Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889–6892. http://dx.doi.org/10.1073/pnas.1018033108.

[vi] Tetlock, Philip E. and Gardner, Dan (2015) Superforecasting: The Art and Science of Prediction. Crown

[vii] Susskind, Richard and Susskind, Daniel (2015). The Future of the Professions: How Technology Will Transform the Work of Human Experts. OUP Oxford.

[viii] Kiechel, Walter, The Lords of Strategy (Boston: Harvard Business Press, 2010).

[ix] Weinberger, David, Too Big to Know: Rethinking Knowledge Now That the Facts Aren’t the Facts, Experts Are Everywhere, and the Smartest Person in the Room Is the Room, Basic Books. 2012.

[x] See https://www.forbes.com/sites/janakirammsv/2017/01/03/how-ibm-and-microsoft-are-disrupting-the-healthcare-industry-with-cognitive-computing/#4eb31d021a92

[xi] Schon, Donald (1991) The Reflective Practitioner: How Professionals Think in Action. Routledge

[xii] Schulz, Kathryn (2011) Being Wrong: Adventures in the Margin of Error: The Meaning of Error in an Age of Certainty. Portobello Books