Elicitation of Expert Opinion

Name: Elicitation of Expert Opinion

[Note that this tool description was first completed for a Disease Risk Analysis tool library and is biased towards that specific application. Some minor modifications have been made to increase its relevance to the Conservation Planning Tools Library, but further work is required.]

References: MacMillan, D.C. and Marshall, K. (2006). The Delphi process – an expert-based approach to ecological modelling in data-poor environments. Animal Conservation 9: 11–19.

Kynn, M. (2008). The ‘heuristics and biases’ bias in expert elicitation. J. R. Statist. Soc. 171(1): 239–264.

Murray, N., MacDiarmid, S.C., Wooldridge, M., Gummow, B., Morley, R.S., Weber, S.E., Giovannini, A. and Wilson, D. (2004). Handbook on Import Risk Analysis for Animals and Animal Products, Volume 2: Quantitative Risk Assessment. OIE, Paris, pp. 73–76.

Vose, D. (2000). Risk Analysis: A Quantitative Guide. Second edition. John Wiley, Chichester, pp. 264–290.

Conservation planning step(s) when this would be used: This tool would be used during the Plan Actions step to predict the outcomes of different potential actions. It might also be applicable to other steps in the process where a paucity of information is inhibiting planning progress.

Experience and expertise required to use the tool: A high degree of expertise is required in the elicitation of expert opinion. When quantitative inputs are derived from expert opinion, experience in appropriate use and interpretation of probability distributions is essential.
 
Data requirements: Elicitation of expert opinion is used where there is a paucity or absence of data (Vose 2000).

Cost: Completely dependent on circumstances, but likely to be high.

Case study: Gale, P., Brouwer, A., Ramnial, V., Kelly, L., Kosmider, R., Fooks, A.R. and Snary, E.L. (2010). Assessing the impact of climate change on vector-borne viruses in the EU through the elicitation of expert opinion. Epidemiology and Infection 138: 214–225.

Gallagher, E., Ryan, J., Kelly, L., Leforban, Y. and Wooldridge, M. (2002). Estimating the risk of importation of foot and mouth disease into Europe. Veterinary Record 150: 769–772.

Description of tool use: Elicitation and combination of expert opinion to generate inputs for a risk assessment are best conducted through a workshop approach using a modified Delphi process (Murray et al. 2004).

Murray and colleagues (2004) consider that twenty is the maximum number of experts that can be managed appropriately in a workshop. The choice of experts is crucial: each should be selected impartially, through a consultative process, on the basis of their knowledge of the given subject, and experts should be drawn from a variety of disciplines appropriate to the subject under consideration. It may be useful, however, to include subsidiary experts who do not have quite the same degree of expertise as the core group. Subsidiary experts may provide extreme values in their estimates, which can be used to generate discussion and to provide evidence of overconfidence, overestimation or underestimation. Discussion of these extreme values can help to reduce biases and obtain more accurate estimates in the second questionnaire (see below). It may be considered inappropriate to include the estimates of subsidiary experts in the final analysis; any such decision should be made prior to the workshop.

The workshop method is conducted as follows (adapted with the permission of the World Organisation for Animal Health (OIE) from Murray et al. 2004, Handbook on Import Risk Analysis for Animals and Animal Products, Volume 2: Quantitative Risk Assessment. OIE, Paris):

Introduction
•    Explain the background to the project and aims of the workshop.
•    Briefly introduce the discipline of risk analysis, the use of expert opinion and probability theory.
•    Explain the questions to be asked, the definitions used in the questions and the assumptions made.

Conditioning the experts
•    Explain the importance of accurate estimates, emphasising that this is an elicitation of opinion, not a test of knowledge.
•    Provide in an easily understood format any data that may be available associated with the question(s) being asked.

Questionnaire 1
•    Prior to the workshop, conduct a pilot questionnaire with a different group of individuals to ensure that each question is clear and to gauge how long it will take to answer.
•    Ensure that the questionnaire is clear, easy to understand and not too long. Where possible, break the questions down into parts.
•    Allow the questionnaire to be answered individually and anonymously.
•    Ask the experts to provide estimates for the maximum and minimum values followed by a most likely value for each question. Asking for estimates in this order reduces anchoring bias.
•    Ask the experts to provide percentage estimates rather than probabilities because percentages are conceptually easier to estimate.
•    Provide aids such as computer software, graph paper or pie charts to help experts visualise percentages.
•    Allow enough time during the workshop to complete the questionnaire.

Analysis 1
•    Produce PERT (Beta-PERT) distributions to describe each expert’s uncertainty around each question, using the minimum, most likely and maximum values elicited.
•    Combine the distributions from each expert regarding a particular question using a discrete distribution, appropriately weighted (if necessary) for each expert (an illustrative sketch of these two steps is given below).
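
As an illustration only (this sketch and its numbers are not from Murray et al. 2004 or Vose 2000; the function names sample_pert and combine_experts are invented for the example), the following Python code shows one way the two Analysis 1 steps might be carried out: a Beta-PERT distribution is fitted to each expert’s minimum, most likely and maximum estimates using the conventional shape parameter of 4, and the experts are then combined through a weighted discrete (mixture) distribution.

```python
# Illustrative sketch only (not from the cited sources): fit a Beta-PERT
# distribution to each expert's answers and combine the experts with a
# weighted discrete (mixture) distribution.
import numpy as np

LAMBDA = 4.0  # conventional PERT shape parameter


def sample_pert(minimum, most_likely, maximum, size, rng):
    """Draw samples from a Beta-PERT(minimum, most_likely, maximum) distribution."""
    span = maximum - minimum
    alpha = 1.0 + LAMBDA * (most_likely - minimum) / span
    beta = 1.0 + LAMBDA * (maximum - most_likely) / span
    # A standard Beta(alpha, beta) rescaled onto [minimum, maximum].
    return minimum + span * rng.beta(alpha, beta, size)


def combine_experts(estimates, weights, size, rng):
    """Weighted discrete (mixture) combination of the experts' PERT distributions."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    # Each Monte Carlo iteration first picks an expert according to the weights,
    # then samples that expert's distribution.
    chosen = rng.choice(len(estimates), size=size, p=weights)
    draws = np.empty(size)
    for i, (mn, ml, mx) in enumerate(estimates):
        mask = chosen == i
        draws[mask] = sample_pert(mn, ml, mx, mask.sum(), rng)
    return draws


rng = np.random.default_rng(seed=1)
# Hypothetical answers to one question from three experts, expressed as
# proportions: (minimum, most likely, maximum).
estimates = [(0.05, 0.15, 0.40), (0.10, 0.20, 0.35), (0.02, 0.10, 0.60)]
combined = combine_experts(estimates, weights=[2, 2, 1], size=50_000, rng=rng)
print(round(float(combined.mean()), 3), np.percentile(combined, [5, 50, 95]).round(3))
```

Commercial risk-analysis software performs these steps automatically; the sketch simply makes the underlying calculations explicit.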

The discussion
•    Use a facilitator to ensure that all experts are included equally in the discussion so as to allow a free exchange of information between them.
•    Discuss the combined distribution for each question in turn.

Questionnaire 2
•    Present the questionnaire to the experts again, ideally the next day, to allow them to amend their previous answers, if they consider it appropriate.

Analysis 2
•    Analyse the answers to Questionnaire 2 as described for Questionnaire 1.
•    Depending on what was decided before the start of the workshop, answers from subsidiary experts may or may not be included.

Results 2
•    Provide the experts with preliminary results as soon as possible after the workshop and send out a validation questionnaire to ensure results are reproducible.
•    Provide the experts with the final results as soon as possible.
•    Invite feedback on the usefulness of the results and the process itself.


Strengths and weaknesses, when to use and interpret with caution: Potential sources of bias, and how to deal with disagreement among experts, need to be considered carefully (Murray et al. 2004).

Bias
A person’s estimate of a distribution’s parameters may be biased by a number of factors. People tend to:
•    give greater weight to information that comes readily to mind
•    be strongly influenced by small, unrepresentative sets of data with which they are familiar.
They may also:
•    be overconfident and estimate uncertainty too narrowly
•    resist changing their mind in the face of new information
•    try to influence decisions and outcomes by casting their beliefs in a particular direction
•    state their beliefs in a way that favours their own performance or status
•    knowingly suppress uncertainty in order to appear knowledgeable
•    persist in stating views in which their confidence has weakened, simply to remain consistent over time.
 
Expert disagreement
In cases of expert disagreement, it is usually best to explore the implications of the judgements of different experts separately to determine whether substantially different conclusions are likely. If the conclusions are not significantly affected, one can conclude that the results are robust despite the disagreement among experts. In some cases, experts may not disagree about the body of knowledge; rather, they may draw different inferences from an agreed body of knowledge. In such cases one needs to make a judgement about which expert is more authoritative for the problem under scrutiny.
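
The check described above can be made concrete in code. The sketch below is illustrative only: the entry probabilities, the number of importation events and the risk_model function are hypothetical placeholders rather than values or models from the cited studies. It propagates each expert’s Beta-PERT distribution separately through the same simple model and compares a decision-relevant output (here, the probability that the annual risk exceeds 5%). If that output leads to the same conclusion for every expert, the disagreement can be treated as unimportant for the decision at hand.

```python
# Illustrative sensitivity check for expert disagreement (hypothetical model):
# propagate each expert's Beta-PERT distribution separately and compare the
# decision-relevant output across experts.
import numpy as np


def sample_pert(minimum, most_likely, maximum, size, rng, lam=4.0):
    """Beta-PERT samples from an expert's (minimum, most likely, maximum)."""
    span = maximum - minimum
    alpha = 1.0 + lam * (most_likely - minimum) / span
    beta = 1.0 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)


def risk_model(p_entry, events_per_year=200):
    """Hypothetical placeholder model: annual probability of at least one
    incursion, given a per-event entry probability and a fixed number of
    importation events per year."""
    return 1.0 - (1.0 - p_entry) ** events_per_year


rng = np.random.default_rng(seed=2)
# Hypothetical per-event entry probabilities from two experts who disagree:
# (minimum, most likely, maximum).
experts = {"Expert A": (1e-5, 5e-5, 2e-4), "Expert B": (5e-5, 2e-4, 1e-3)}

for name, (mn, ml, mx) in experts.items():
    p_entry = sample_pert(mn, ml, mx, size=50_000, rng=rng)
    annual_risk = risk_model(p_entry)
    # Decision metric: probability that the annual incursion risk exceeds 5%.
    print(name, round(float((annual_risk > 0.05).mean()), 3))
```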

Choice of probability distribution
The PERT (Beta-PERT) distribution is used most commonly when eliciting quantitative estimates from experts (see Gallagher et al. 2002), although other distributions such as the uniform, general, cumulative or discrete may sometimes be used (Murray et al. 2004, Vose 2000). The uniform distribution is used in situations where experts are unable to propose a ‘most likely’ value but will propose a minimum and a maximum value. However, the uniform distribution is a very poor modeller of expert opinion and should be avoided if possible: it is very unlikely that an expert will be able to define a maximum and minimum value but have no opinion on a most likely value (Vose 2000). Individual PERT (Beta-PERT) distributions elicited from each expert are combined in a discrete distribution to produce the input value for each variable in the risk assessment model (Vose 2000, Gallagher et al. 2002).
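
The practical difference between the two options can be seen directly. In the illustrative sketch below (the numbers are invented, not taken from the cited studies), a uniform and a Beta-PERT distribution share the same minimum and maximum: the uniform spreads probability evenly across the whole range, whereas the PERT concentrates it around the most likely value, which usually better reflects what the expert actually believes.

```python
# Illustrative comparison (invented numbers) of a uniform and a Beta-PERT
# distribution sharing the same minimum and maximum values.
import numpy as np

rng = np.random.default_rng(seed=3)
minimum, most_likely, maximum = 0.05, 0.15, 0.40  # invented example values
size = 100_000

# Uniform: every value between the minimum and maximum is equally likely.
uniform = rng.uniform(minimum, maximum, size)

# Beta-PERT with the conventional shape parameter of 4.
span = maximum - minimum
alpha = 1.0 + 4.0 * (most_likely - minimum) / span
beta = 1.0 + 4.0 * (maximum - most_likely) / span
pert = minimum + span * rng.beta(alpha, beta, size)

for name, x in [("uniform", uniform), ("PERT", pert)]:
    print(name, round(float(x.mean()), 3), np.percentile(x, [5, 95]).round(3))
# The uniform mean sits at the midpoint of the range (0.225) regardless of the
# expert's most likely value; the PERT mean (about 0.175) is pulled towards
# 0.15, and far less probability is placed on values the expert considers
# implausible.
```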

 

___________________________________________________________________________
Contributor(s) name: Stuart C MacDiarmid [original author for the DRA Tools Library].
Affiliation: New Zealand Ministry of Agriculture and Forestry
Email: stuart.macdiarmid@maf.govt.nz       
Date: 4 August 2011
[Modifications relating to conservation planning were taken from the Abruzzi workshop data.]