Population Projections for Canada (2018 to 2068), Provinces and Territories (2018 to 2043): Technical Report on Methodology and Assumptions
Chapter 2: 2018 Survey of Experts on Future Demographic Trends


by Patrice Dion, Nora Galbraith and Elham Sirag



In the previous edition of Statistics Canada’s demographic projections (2013 to 2063), a formal consultation of expert Canadian demographers was undertaken to obtain their views on future demographic trends (Bohnert 2015). Although Statistics Canada already engaged in some external consultations on projection assumptions and methods, this more rigorous, formalized consultation process was done in an effort to improve the plausibility and credibility of the projection assumptions.

The approach adopted at the time was modeled largely on techniques recently used by a number of other statistical agencies, principally the United Kingdom’s Office for National Statistics (Shaw 2008). A questionnaire distributed to experts asked them to provide a plausible range of quantitative estimates of future indicators of fertility, mortality and migration according to a specified level of confidence, as well as qualitative justifications for those numeric estimates.

The first venture into a formal surveying of Canadian demographers proved to be a fruitful exercise for Statistics Canada’s Demographic Projections Program. The quantitative estimates received in the survey directly informed the assumption-building process. Equally important were the many arguments, methodologies and other evidence that were provided by experts via the open-ended comment sections of the survey. Given the observed benefits of the process to the credibility of the projections, it was determined that formal expert elicitation should continue in future editions of the demographic projections.

For the present edition of the projections, new goals were associated with the expert elicitation exercise. Principally, we wanted to use the elicitation exercise to improve how we characterize uncertainty in the projections; specifically, we sought to obtain complete probability distributions from experts regarding various future demographic indicators. Soliciting probabilistic information from survey respondents is a complex undertaking, and so we examined in greater depth the science of expert elicitation, a large and mature field in itself. The key findings of this review are described below, followed by a description of the new elicitation protocol devised for the current series of the demographic projections.

What is expert elicitation, and why use it?

Formal expert elicitation can be considered a structured approach to consulting experts on a subject for which there is insufficient knowledge or uncertainty is great (Knol et al. 2010). It is a way to gather and synthesize the knowledge and wisdom of experts. In the context of modeling uncertain events, elicitation can be used to translate someone’s judgment about those uncertain events into something that can be usefully modeled (Gosling 2014).

Eliciting the views of experts is often the only viable option when a decision must be made in the absence of empirical data, or when the required data are limited, unreliable or prohibitively expensive (James et al. 2010; Runge et al. 2011). However, expert elicitation should not be viewed as a mere “last resort”, as it offers several benefits. When properly structured and documented, expert elicitation provides greater transparency in addressing uncertainties than other conventional statistical techniques (Knol et al. 2010). Expert judgments are also relatively quick and inexpensive to obtain (Knol et al. 2010; Gosling 2014). Further, in some cases, expert elicitation may be preferable to other methods such as time series extrapolation, since it can take into consideration factors other than what has previously been observed. This possibility is particularly appealing for the projection of demographic phenomena, which often display ambiguous historical trends (in the case of fertility) and are highly affected by social, political or economic developments that can be very difficult to predict based purely on historical trends (in the case of immigration) (Lutz et al. 1998).

Producing population projections—whether deterministic or probabilistic—inevitably requires the producer to make numerous subjective judgment calls. Thus, ‘expert opinion’ is always utilized in the development of population projections. Typically, projection assumptions are developed by a small team of individuals who examine historical and recent trends and debate emerging issues. In this context, tendencies towards groupthink can arise. By expanding and diversifying the number of individuals involved in gathering expert opinion, including experts working in various capacities within the broader field of demography, the assumption-building process becomes more rigorous and transparent. It is our belief that the use of a formal expert elicitation improves the credibility of resulting projection assumptions.

While offering many benefits, the practice of eliciting expert views is also subject to numerous challenges, mostly related to the limitations of human estimation abilities. Expert judgments, like all human judgments, are susceptible to numerous heuristics that can create biases, affecting both the reliability and validity of estimates (Tversky and Kahneman 1974; Morgan and Henrion 1990; Hoffman et al. 1995; Kynn 2008; Lutz et al. 1998; Martin et al. 2011). The main types of bias that can hinder expert elicitation are anchoring bias (estimates are anchored to a natural starting point or benchmark and, as a result, do not vary much from it) and availability bias (the estimated probability of an event is based on familiarity or cognitive availability rather than objective frequency). Numerous other factors can also affect elicitation exercises, including plain misinterpretation on the part of the respondent and phenomena as ephemeral as the respondent’s mood or level of fatigue at the time of the elicitation (Grigore et al. 2017; Knol et al. 2010; Runge et al. 2011). Motivational biases, such as an expert’s desire to maintain or elevate their public image through displays of high confidence, can also arise among individuals considered to be leading experts in their field (Runge et al. 2011).

Furthermore, it has been found that humans, whether expert or not, have difficulty estimating probability, or constructing probability distributions, a task often requested of them in expert elicitation (Morgan and Henrion 1990; Lee 1998; Kynn 2008; Garthwaite et al. 2005). Lutz et al. (1998) and Lee (1998) both question whether experts can reasonably be expected to distinguish between values demarcating the 85th versus 95th percentile, for example, as they are often called upon to do in elicitation exercises, noting that such estimations create a false sense of precision. Experts also tend to be overconfident in their judgments, regardless of elicitation technique, which can result in a systematic underestimation of uncertainty (Martin et al. 2011; Burgman et al. 2006; Speirs-Bridge et al. 2010; Goldstein and Rothschild 2014).

In sum, the use of expert elicitation is not without its challenges. The judgments of experts may be biased, unreliable, poorly calibrated or poorly communicated. As a result, expert elicitation should not be considered a precise science (Gosling 2014). Nevertheless, building projections requires that some form of expert judgment be made—whether on the choice of method or the assumptions about each projection component. The use of expert elicitation, when used with knowledge of its limitations, permits an enriched assumption-building process, particularly with regard to the treatment of uncertainty. Expert elicitation also provides a means of explicitly formalizing the contribution of experts in the assumption-building process.

New context, new goals for our elicitation protocol

Designing an elicitation protocol requires a balancing act: on one hand, we want to provide a pleasant, relatively undemanding elicitation experience for the respondent; on the other hand, we want to ensure that we capture his or her true beliefs to the greatest extent possible (Sperber et al. 2013). As seen in the preceding section, much research has been completed on the challenges associated with expert elicitation, and numerous studies have examined the best methods to counter or minimize those challenges. Readers can find comprehensive reviews of these topics in Garthwaite et al. (2005), O’Hagan et al. (2006) and Dias et al. (2018). Following our review of the science of expert elicitation, the new elicitation protocol for the 2018 Survey of Experts on Future Demographic Trends implements many best practices in elicitation as well as several methodological innovations, permitting us to attain several new goals. The key goals for the 2018 elicitation exercise were the following.

Improved expression of uncertainty

In comparison to eliciting a single point estimate, obtaining complete probability distributions from experts allows for an expression of the uncertainty about the parameter of interest (Morris et al. 2014). Recent methodological innovations have facilitated the process of creating flexible probability density functions using only a small number of parameters elicited from experts. These innovations open up the possibility of achieving more accurate and coherent portrayals of respondent judgments about the future, and could be used for the production of probabilistic demographic projections in the future (Lutz et al. 1998; Lutz and Scherbov 1998; Tuljapurkar et al. 2004).

Improved approach to the aggregation of individual responses

After eliciting the views of numerous experts, it is necessary to combine their views in some manner. A benefit of using probability distributions to quantify experts’ beliefs is that, for each demographic component, the individual distributions can be combined in such a way that the aggregate result is also a probability distribution. This provides a statistically coherent expression of aggregate beliefs and, within a deterministic framework, allows for the use of different summary statistics – such as select quantiles of the aggregate distribution – to derive projection assumptions. Such a technique can also be utilized within a probabilistic framework, since a single representative probability distribution is required for each of the components.

Visual feedback via a graphical user interface

It has been suggested that providing visual representations of experts’ quantitative judgments can greatly aid the elicitation process, allowing the expert to assess, confirm or revise their judgments as desired, thus improving calibration and accuracy (Garthwaite et al. 2005; Kynn 2008; Speirs-Bridge et al. 2010; Morgan 2013; Goldstein and Rothschild 2014). It was therefore a priority to design the elicitation tool in a manner that provides the respondent with an interactive graphical user interface with which they could visually assess how their estimates translated into a probability distribution. The interface is flexible enough to accommodate different types of distributions (for instance, left- or right-skewed, bounded or unbounded), allowing respondents to provide nuanced responses.

More user-friendly remote elicitation experience

In the 2013-based elicitation exercise, a survey was distributed remotely to experts in the form of a modifiable Adobe PDF document. This procedure brought challenges: some respondents did not have the required software, while others had outdated versions that did not perform in the anticipated way.

For the new edition, we investigated the potential of a number of web-based survey tools, but none could incorporate the specifications we required for the elicitation tool. Instead, it was determined that the design of an MS Excel spreadsheet-based tool offered numerous benefits: the software is widely used by respondents and is easily modifiable to our custom requirements, including the incorporation of a graphical user interface and the acceptance of both textual and numerical inputs.

These principal goals, combined with our current knowledge of best practices in elicitation, guided the design of the 2018 expert elicitation protocol, described in the following section.

Elicitation protocol

Our new elicitation protocol is inspired principally by the spreadsheet-based remote elicitation tools developed by Sperber et al. (2013) and Grigore et al. (2017). It also benefits greatly from the recent development of the metalog algorithm by Keelin (2016, 2018), which permitted us to create a graphical user interface with which the expert could review how their parameter estimates translate into a flexible continuous probability distribution.

Introduction to the survey

On opening the survey, respondents are first provided with a brief description of the task at hand (Figure 2.1). The difficulty, if not impossibility, of predicting the future is acknowledged, and respondents are encouraged to embrace and communicate their level of uncertainty. Respondents are also instructed to skip the sections related to demographic components for which they feel they do not have sufficient expertise (Morgan and Henrion 1990; Lee 1998; Lutz et al. 1998). Finally, respondents are encouraged to contact us in the event that they have any questions or issues in completing the survey.

Description for Figure 2.1

This image shows a screenshot from the 2018 Survey of Experts on Future Demographic Trends. The screenshot contains the “Introduction” section of the survey, which includes an overview of the survey content, some specific instructions to respondents and contact information for assistance.

Following the introduction to the survey, respondents are asked several questions about their background: years of experience; level of expertise in fertility, mortality, international migration and demographic projections; and current domain of work (Figure 2.2). This information is collected for two purposes: first, to assess whether the group of respondents is suitably diverse (as recommended by Morgan and Henrion 1990 and Aspinall 2010, among others); and second, to weight responses during aggregation, as described in more detail later.

Description for Figure 2.2

This image shows a screenshot from the 2018 Survey of Experts on Future Demographic Trends. The screenshot contains the “Your Background” section of the survey, which includes three questions: 1) How many years of experience do you have in the general field of demography or population studies?; 2) How would you rate your level of expertise in each of the following fields (fertility, mortality, international migration, demographic projections); 3) Currently, what is your main domain of work?

Eliciting probability distributions for indicators of fertility, mortality and immigration in Canada in 2043

The main part of the survey consists of the elicitation of qualitative arguments and quantitative estimates regarding the evolution of selected indicators of fertility (period total fertility rate), mortality (life expectancy at birth for males and for females) and immigration (number of immigrants per thousand population) in Canada in 2043. We describe the process using the fertility component as an example. Figure 2.3 provides a screenshot from the tool, showing the various steps of the elicitation procedure for the fertility section.

Description for Figure 2.3

This image shows a screenshot from the 2018 Survey of Experts on Future Demographic Trends. The screenshot contains the “Fertility” section of the survey, which includes 4 steps: Step 1 – Arguments; Step 2 – Elicitation; Step 3 – Review Inputs; Step 4 – Overall Assessment.

In Step 1, we ask for qualitative arguments that are likely to influence the future path of the total fertility rate in Canada between now and 2043. For this task, experts can consult a series of tables and figures showing historical trends for various fertility indicators. Experts are invited to think about a variety of possible future scenarios (increase, decrease, status quo) when formulating their arguments. Besides providing critical information for putting their later quantitative estimates into context, this procedure is recommended because it encourages experts to think about the substantive details of their judgments and consider a whole range of possibilities, thus reducing potential overconfidence (Morgan and Henrion 1990; Kadane and Wolfson 1998; Garthwaite et al. 2005; Kynn 2008).

Step 2 is modeled in large part on the step-based procedures utilized by Speirs-Bridge et al. (2010), Sperber et al. (2013) and Grigore et al. (2017).

  • In Step 2(a), experts are first asked to provide the lower and upper bounds of a range covering nearly all plausible values of the period total fertility rate in Canada in 2043. This is an intentional practice used to minimize potential overconfidence (Speirs-Bridge et al. 2010; Sperber et al. 2013; Oakley and O’Hagan 2016; Grigore et al. 2017; Hanea et al. 2018). Indeed, asking experts to first provide a single central estimate such as a mean or a median can trigger anchoring to that value in subsequent responses.
  • In Step 2(b), experts are then asked to report how confident they are that the true value will fall within the range they just specified. Allowing experts to determine their own level of confidence has been found to reduce overconfidence in comparison with asking them to identify the low and high bounds of an interval at some pre-determined confidence level (Speirs-Bridge et al. 2010). That said, we impose the restriction that the respondent must choose a confidence level of at least 90%; experts are asked to revise their range if they are confident at a level less than 90%.
  • In Step 2(c), experts are asked to estimate the median value of the plausible range they provided in Step 2(a), such that they expect an equal (50-50) chance that the true value lies above or below the median.
  • In Step 2(d), the range of values between the lower bound and the median is split into two segments of equal length, and the same is done for the values between the median and the upper bound. The respondent is then asked to assign to each segment the probability that the true value falls within it. Note that each half below and above the median has, by definition, a 50% probability of occurrence, so it is a matter of redistributing that 50% across the two segments on each side.
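The inputs collected in Steps 2(a) to 2(d) can be assembled into five cumulative-probability/value pairs, the raw material for fitting a distribution. The sketch below is illustrative only: the function name is ours, and the assumption that the mass outside the plausible range is split equally between the two tails is ours as well, not a documented feature of the survey instrument.

```python
def elicited_quantile_pairs(lower, upper, confidence, median,
                            p_outer_low, p_outer_high):
    """Assemble Step 2 inputs into five (cumulative probability, value) pairs.

    p_outer_low  -- probability (out of the 50% below the median) assigned
                    to the segment [lower, midpoint(lower, median)]
    p_outer_high -- probability (out of the 50% above the median) assigned
                    to the segment [midpoint(median, upper), upper]
    Assumption (ours): the 1 - confidence mass outside the plausible range
    is split equally between the two tails, and the segment probabilities
    are rescaled to leave room for those tails.
    """
    assert lower < median < upper and confidence >= 0.90
    tail = (1.0 - confidence) / 2.0      # assumed mass in each tail
    scale = (0.5 - tail) / 0.5           # rescaling factor for the segments
    mid_low = (lower + median) / 2.0
    mid_high = (median + upper) / 2.0
    return [
        (tail, lower),
        (tail + p_outer_low * scale, mid_low),
        (0.5, median),
        (0.5 + (0.5 - p_outer_high) * scale, mid_high),
        (1.0 - tail, upper),
    ]
```

For example, a respondent giving a range of 1.2 to 2.2 children per woman with 90% confidence, a median of 1.6 and 20% on each outer segment yields quantile pairs at cumulative probabilities 0.05, 0.23, 0.50, 0.77 and 0.95.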

Throughout Step 2, several “checks”, in the form of pop-up warning messages, were built into the elicitation tool in order to prevent illogical inputs of various forms (Figure 2.4 provides an example for the case where a respondent provides an upper bound estimate that is lower than their lower bound estimate). This follows best practices in remote elicitation (Sperber et al. 2013; Grigore et al. 2017).
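As a rough illustration, a validation routine along these lines could flag inconsistent Step 2 inputs; the function name and the exact messages are illustrative, not the tool’s actual implementation.

```python
def validate_step2(lower, upper, confidence, median):
    """Return a list of warning messages for illogical Step 2 inputs.

    Illustrative sketch: the rules mirror the kinds of checks described
    in the text, not the elicitation tool's actual code.
    """
    warnings = []
    if upper <= lower:
        warnings.append("Upper bound must be higher than lower bound")
    elif not (lower < median < upper):
        warnings.append("Median must lie strictly between the two bounds")
    if confidence < 0.90:
        warnings.append("Confidence must be at least 90%; please widen your range")
    return warnings
```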

Description for Figure 2.4

This image shows a screenshot from the 2018 Survey of Experts on Future Demographic Trends. The screenshot contains an example of one of the warning messages respondents may receive while completing the survey. The warning message reads “Upper bound must be higher than lower bound”.

We turn next to a key and innovative feature of our protocol: in Step 3, respondents are provided with a visual representation of the parameter estimates they provided in Step 2, in the form of a histogram and probability density function.

While there have been several tools developed for the purpose of providing experts with visual representations of their estimates (see Sperber et al. 2013; Morris et al. 2014; Grigore et al. 2017), none of these existing tools fit our exact requirements, which were the following:

  • The visualization can be implemented within MS Excel (in a remote elicitation setting);
  • The visualization appears to the expert instantaneously upon entering their parameter estimates;
  • The visualization is flexible enough to work with bounded, unbounded or semi-bounded probability distributions, and with left- or right-skewed or otherwise ‘irregular’ distributional shapes.

Fortunately, Keelin (2016, 2018) recently developed a very flexible algorithm that permits the calculation of probability density functions for many possible parameter combinations (as an example, see the probability density function displayed in Figure 2.3, corresponding to a fictitious respondent’s parameter estimates and confidence level inputs for the total fertility rate in Canada in 2043).

The metalog distribution – short for “meta-logistic” – belongs to the larger class of Quantile-Parameterized Distributions (QPDs) developed by Keelin and Powley (2011); a QPD is any continuous probability distribution that can be fully parameterized in terms of its quantiles. The appeal of using QPDs in modeling uncertainty is that modifications can be made to their quantile functions (through the addition of extra shape parameters, for example), enabling them to represent a broader range of beliefs. The quantile function of the logistic distribution (a QPD) is an example of one that can be modified with relative ease because it is linear in its parameters.

The “meta” in metalog is a term used by Keelin to describe distributions whose original parameters have been substituted in order to incorporate a greater number of shape parameters. In theory, there is no limit to the number of shape parameters the metalog distribution can have, meaning it can be used to model distributional characteristics such as right- or left-skewness, varying levels of kurtosis, and multi-modality. However, the inclusion of additional shape parameters requires the elicitation of a greater number of quantiles. The procedure described in Step 2 is designed to elicit five quantiles, enabling the algorithm to fit unbounded metalog distributions with up to five shape parameters. In the event that experts’ inputs describe a semi-bounded or bounded distribution, log- or logit-transforms, respectively, are applied to the metalog quantile function in order to restrict its range accordingly.
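As an illustration of how such a fit can work, the sketch below solves for the coefficients of an unbounded metalog from elicited quantile pairs using the first five basis terms of Keelin’s (2016) series. The function names are ours, and this is a simplified sketch rather than the exact algorithm used in the elicitation tool.

```python
import numpy as np

def metalog_coefficients(pairs, n_terms=5):
    """Fit an unbounded metalog to (cumulative probability, value) pairs.

    Uses the first n_terms basis functions of Keelin's (2016) series:
    1, logit(y), (y-0.5)*logit(y), (y-0.5), (y-0.5)^2, where
    logit(y) = ln(y/(1-y)). With as many pairs as terms the linear solve
    is exact; with more pairs it becomes a least-squares fit.
    """
    y = np.array([p for p, _ in pairs], dtype=float)
    x = np.array([v for _, v in pairs], dtype=float)
    logit = np.log(y / (1.0 - y))
    columns = [np.ones_like(y), logit, (y - 0.5) * logit, y - 0.5, (y - 0.5) ** 2]
    basis = np.column_stack(columns[:n_terms])
    coeffs, *_ = np.linalg.lstsq(basis, x, rcond=None)
    return coeffs

def metalog_quantile(coeffs, y):
    """Evaluate the fitted metalog quantile function at cumulative probability y."""
    logit = np.log(y / (1.0 - y))
    terms = [1.0, logit, (y - 0.5) * logit, y - 0.5, (y - 0.5) ** 2]
    return float(sum(c * t for c, t in zip(coeffs, terms)))
```

Fitting five quantile pairs taken from a standard logistic distribution, for instance, recovers the logistic quantile function exactly (coefficients of approximately 0 and 1 on the first two terms, 0 elsewhere).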

Despite its high flexibility, there can be instances where our version of the metalog algorithm (having a maximum of five shape parameters) is unable to compute a probability density function given the inputs provided. This can occur, for example, if an expert envisions a strongly bimodal probability density function. For this reason, a rudimentary histogram is also presented to the expert which, despite not showing the tails of the distribution, still reflects in a crude manner the inputs of experts, allowing them to recognize any possible mistakes they may have made, or possible biases they may have fallen into. When a probability density function cannot be computed, experts are informed and instructed to go to the next step if they nevertheless feel comfortable with their inputs.

Once experts have reviewed the graphed densities and are satisfied with their inputs, they are invited to comment on the results in Step 4. They are also asked to indicate to what extent the resulting PDF represents an accurate description of their beliefs (i.e., very accurate, good, poor). Lastly, experts who answered that the visualization of the results did not provide a coherent representation of their beliefs are asked to provide further explanation.

End of survey

At the end of the survey, experts are asked to confirm whether they would like their names to be acknowledged in future Statistics Canada demographic projections products, while maintaining anonymity in their individual responses (Figure 2.5). This ‘limited anonymity’ has been found to be important in limiting possible motivational biases and permitting respondents to be as unconstrained as possible in their responses (Knol et al. 2010; Morgan 2013). Finally, experts are encouraged to comment on their experience with the elicitation, as well as to provide any general comments or suggestions regarding Statistics Canada’s demographic projections program. Allowing the expert to give feedback on the elicitation exercise increases the chances that their knowledge and views are captured accurately (Gosling 2014; Runge et al. 2011; Martin et al. 2011).

Description for Figure 2.5

This image shows a screenshot from the 2018 Survey of Experts on Future Demographic Trends. The screenshot contains the “End of Survey” section of the survey. Respondents are reminded that their individual comments and estimates will not be identifiable in any published materials. Respondents are asked “May we publish your name in the forthcoming edition of the Demographic Projections?” Finally, respondents are encouraged to comment on their experience with the elicitation exercise.

Target respondents and method of recruitment

With the assistance of the council members of Canada’s two demography associations, an email invitation to participate in the 2018 Survey of Experts on Future Demographic Trends was sent to all members of the Canadian Population Society (CPS) and l’Association des démographes du Québec (ADQ). Remote elicitation greatly increases the number of potential respondents that can be reached while keeping costs minimal. Additional personal email invitations were sent to a number of individuals with well-known expertise in fertility, mortality, immigration or demographic projections.

In the context of an elicitation on the topic of Canadian demography—a very small academic field, narrowed further by the fact that we were asking specifically about the future, requiring some level of familiarity with demographic projections—experts are a fairly scarce resource. That said, it has been suggested that only a small number of experts—around six to fifteen—are in fact required to obtain robust elicitation results, beyond which results should not change substantially (Hogarth 1978; Aspinall 2010; Knol et al. 2010). In total, we received 18 responses to the 2018 elicitation exercise. The characteristics of the respondents are described in the next section.

Respondent characteristics

The 18 respondents to the 2018 Survey of Experts on Future Demographic Trends were found to represent a fairly well-balanced mix of expertise (Figure 2.6), general years of experience in the field (Figure 2.7), and current domain of work (Figure 2.8). The majority of respondents (10 out of 18) reported having high levels of expertise in demographic projections.

Data table for Figure 2.6
Number of respondents for each section of the 2018 Survey of Experts on Future Demographic Trends, according to their self-rated level of expertise in that specific field

Self-rated level of expertise   Fertility   Mortality   Immigration
Low                             2           2           0
Intermediate                    11          4           6
High                            4           4           8

Data table for Figure 2.7
Years of experience in the field of demography among respondents to the 2018 Survey of Experts on Future Demographic Trends

Years of experience   Number of respondents
5 to 9 years          1
10 to 14 years        5
15 to 24 years        4
25 years or more      8

Data table for Figure 2.8
Current domain of work of respondents to the 2018 Survey of Experts on Future Demographic Trends

Current domain of work         Number of respondents
Academia                       8
Government                     6
Non-profit research / Policy   1
Private sector                 1
Retired                        2
Other                          0

By and large, respondents reporting low or no expertise in a given component elected to skip the questions relating to that component, as was expected. While nearly all respondents completed the section on fertility, response rates were lower for the sections on immigration and, particularly, mortality, where there was a relatively low number of respondents with self-rated high expertise (Figure 2.6).

Aggregating the individual responses

The choice of aggregation method was made with the goal of capturing as much information as possible from the experts’ individual beliefs (i.e., avoiding a “compromise”), while ensuring that the aggregate result is itself a valid probability distribution from which relevant summary statistics – such as the mean, median and quantiles – can be derived. For this reason, we adopted a mixture model approach (referred to as a “linear opinion pool” when applied in the context of expert elicitation) in which the aggregate distribution for each component can be thought of as a weighted average of the individual expert distributions.

The most commonly adopted weighting scheme in mixture models is an equal-weights scheme, in which the aggregate distribution is simply the arithmetic mean of the component distributions. Another common approach is to weight expert distributions based on additional criteria, such as subject matter expertise. A scheme that assigns greater relative weight to distributions belonging to experts with more expertise has an intuitive appeal, especially in a context where we solicit a large number of experts in demography with varying levels of expertise in the areas of fertility, mortality and immigration. Moreover, some respondents who considered themselves to have a low level of expertise may have agreed to answer the survey precisely because they expected this self-assessment to be taken into consideration.

While there are good reasons to believe that experts reporting higher levels of expertise are likely to exercise better judgment, past experiments show that this is not true under all circumstances (Morgan and Henrion 1990; Tetlock 2005; Martin et al. 2011; Sperber et al. 2013). This pattern may reflect the fact that respondents familiar with the considerable challenges and ‘inaccuracies’ inherent to the field of demographic projections may be less confident in their ability to accurately assess the future evolution of Canada’s fertility, mortality and immigration, and therefore less prone to overconfidence.

We therefore chose to implement an additive weighting scheme that assigns weights to expert distributions based on their self-rated expertise in each of the three components and in population projections, as well as their years of experience in the general field of demography. It works as follows:

  1. Experts are assigned partial weights between 1 and 4 based on whether their self-rated expertise level in the areas of fertility, mortality, and immigration is reported as “none,” “low,” “intermediate,” or “high”;Note 
  2. Experts are then assigned a similar partial weight (between 1 and 4) based on their self-rated expertise in the area of demographic projections;
  3. Experts are also assigned a partial weight between 1 and 5 based on their self-reported years of experience in demography or population studies: less than 5 years, 5 to 9 years, 10 to 14 years, 15 to 24 years, or 25 or more years;
  4. For each component, the partial weights in (1)-(3) are then summed (and normalized) to derive the final weights assigned to each expert in the three mixture distributions.
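The steps above can be sketched in a few lines of code. The category-to-weight mappings follow the text; the function name, the dictionary keys and the two hypothetical respondents are illustrative assumptions, not the actual survey data structure:

```python
# Partial weights described in the text: self-rated expertise maps to
# 1-4, and years of experience in demography maps to 1-5.
EXPERTISE = {"none": 1, "low": 2, "intermediate": 3, "high": 4}
EXPERIENCE = {"<5": 1, "5-9": 2, "10-14": 3, "15-24": 4, "25+": 5}

def expert_weights(responses, component):
    """Final (normalized) weights for one component, e.g. 'fertility':
    sum the three partial weights per expert, then divide by the total."""
    raw = [
        EXPERTISE[r[component]]        # (1) expertise in the component itself
        + EXPERTISE[r["projections"]]  # (2) expertise in demographic projections
        + EXPERIENCE[r["experience"]]  # (3) years of experience in demography
        for r in responses
    ]
    total = sum(raw)
    return [w / total for w in raw]

# Two hypothetical respondents:
responses = [
    {"fertility": "high", "projections": "intermediate", "experience": "25+"},
    {"fertility": "low", "projections": "low", "experience": "5-9"},
]
weights = expert_weights(responses, "fertility")  # [12/18, 6/18]
```

Because the scheme is additive rather than multiplicative, even an expert reporting "none" in every category retains a positive weight, so no response is discarded outright.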

Although experts’ responses are parametrized by metalog distributions,Note  the resulting mixture distributions for fertility, mortality and immigration are not themselves metalog distributions, and do not belong to any defined parametric family. Characteristics such as central moments and quantiles are therefore derived using empirical methods.
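One simple empirical approach – a sketch under assumed inputs, not necessarily the exact procedure used in the report – is Monte Carlo sampling from the mixture: pick an expert with probability equal to their weight, then draw from that expert's distribution (normal stand-ins are used here in place of the fitted metalogs):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical expert distributions (normal stand-ins) and weights.
means = np.array([1.6, 1.7, 1.5])
sds = np.array([0.15, 0.10, 0.20])
weights = np.array([0.4, 0.35, 0.25])

# Draw from the mixture: choose an expert index with probability equal
# to its weight, then sample from that expert's distribution.
n = 100_000
idx = rng.choice(len(weights), size=n, p=weights)
draws = rng.normal(means[idx], sds[idx])

# Empirical summary statistics of the aggregate distribution.
mixture_mean = draws.mean()  # close to the weighted mean of expert means
median, q10, q90 = np.quantile(draws, [0.5, 0.1, 0.9])
```

Alternatively, quantiles can be obtained by numerically inverting the pooled CDF; sampling has the advantage of working for any component distributions one can draw from.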

Summaries of the survey responses and mixture distributions for each of the three components are provided in their respective chapters of this report.


Acknowledgements

We would like to express our deep gratitude to the following experts who graciously volunteered their time to complete the 2018 Survey of Experts on Future Demographic Trends: Roderic Beaujot, Alain Bélanger, Julien Bérard-Chagnon, Éric Caron Malenfant, Gustave Goldmann, Michael Haan, Daniel Hiebert, Nan Li, Rachel Margolis, Guillaume Marois, Jean-Dominique Morency, François Nault, Doug Norris, François Pelletier, Étienne Poulin, Claudine Provencher and Luc Roy, as well as one anonymous expert.

We would also like to sincerely thank Michael Haan, President of the Canadian Population Society, Benoît Laplante, President of l’Association des démographes du Québec, and Laurent Martel, Director, Demography Division, Statistics Canada, for facilitating the recruitment of expert respondents, as well as numerous colleagues within the Demography Division who assisted us in the testing of the tool.


References

Aspinall, W. 2010. “A route to more tractable expert advice”, Nature, volume 463, pages 294 to 295.

Bohnert, N. 2015. “Chapter 2: Opinion Survey on Future Demographic Trends”, in Bohnert, N., J. Chagnon, S. Coulombe, P. Dion and L. Martel. 2015. Population Projections for Canada (2013 to 2063), Provinces and Territories (2013 to 2038): Technical Report on Methodology and Assumptions, Statistics Canada catalogue no. 91-620-X.

Burgman, M., F. Fidler, M. McBride, T. Walshe and B. Wintle. 2006. Eliciting expert judgment: Literature review, Australian Centre of Excellence for Risk Analysis, round 1, project 11.

Dias, L., A. Morton and J. Quigley (editors). 2018. Elicitation: the science and art of structuring judgment, Springer, New York.

Garthwaite, P.H., J.B. Kadane and A. O’Hagan. 2005. “Statistical methods for eliciting probability distributions”, Technical Paper 1-2005, Carnegie Mellon University Research Showcase.

Goldstein, D.G. and D. Rothschild. 2014. “Lay understanding of probability distributions”, Judgment and Decision Making, volume 9, issue 1, pages 1 to 14.

Gosling, J.P. 2014. Methods for eliciting expert opinion to inform health technology assessment, Medical Research Council, UK.

Grigore, B., J. Peters, C. Hyde and K. Stein. 2017. “Explicit: A feasibility study of remote expert elicitation in health technology assessment”, BMC Medical Informatics and Decision Making, volume 17, issue 131.

Hanea, A.M., M. Burgman and V. Hemming. 2018. “Chapter 5: IDEA for Uncertainty Quantification”, in Dias, L., A. Morton and J. Quigley (editors). 2018. Elicitation: the science and art of structuring judgment, Springer, New York.

Hoffman, R.R., N.R. Shadbolt, M.A. Burton and G. Klein. 1995. “Eliciting knowledge from experts: A methodological analysis”, Organizational Behavior and Human Decision Processes, volume 62, issue 2, pages 129 to 158.

Hogarth, R.M. 1978. “A note on aggregating opinions”, Organizational Behavior and Human Performance, volume 21, issue 1, pages 40 to 46.

James, A., S. Low-Choy and K. Mengersen. 2010. “Elicitator: an expert elicitation tool for regression in ecology”, Environmental Modelling & Software, volume 25, issue 1, pages 129 to 145.

Kadane, J. and L.J. Wolfson. 1998. “Experiences in Elicitation”, Journal of the Royal Statistical Society: Series D (The Statistician), volume 47, issue 1, pages 3 to 19.

Keelin, T.W. 2016. “The Metalog Distributions”, Decision Analysis, volume 13, issue 4.

Keelin, T.W. 2018. The Metalog Distributions – Excel workbook, http://www.metalogdistributions.com/excelworkbooks.html.

Keelin, T.W. and B.W. Powley. 2011. “Quantile-parameterized distributions”, Decision Analysis, volume 8, issue 3, pages 206 to 219.

Knol, A.B., P. Slottje, J.P. van der Sluijs and E. Lebret. 2010. “The use of expert elicitation in environmental health impact assessment: a seven step procedure”, Environmental Health, volume 9, issue 19.

Kynn, M. 2008. “The ‘heuristics and biases’ bias in expert elicitation”, Journal of the Royal Statistical Society, Series A (Statistics in Society), volume 171, issue 1, pages 239 to 264.

Lee, R.D. 1998. “Probabilistic approaches to population forecasting”, Population and Development Review, volume 24, pages 156 to 190.

Lutz, W., W.C. Sanderson and S. Scherbov. 1998. “Expert-based probabilistic population projections”, Population and Development Review, volume 24, pages 139 to 155.

Lutz, W. and S. Scherbov. 1998. “An expert-based framework for probabilistic national population projections: The example of Austria”, European Journal of Population, volume 14, pages 1 to 17.

Martin, T.G., M.A. Burgman, F. Fidler, P.M. Kuhnert, S. Low-Choy, M. McBride and K. Mengersen. 2011. “Eliciting expert knowledge in conservation science”, Conservation Biology, volume 26, issue 1, pages 29 to 38.

Morgan, M.G. and M. Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis, Cambridge University Press.

Morgan, M.G. 2013. “Use (and abuse) of expert elicitation in support of decision making for public policy”, PNAS Perspective, volume 111, issue 20, pages 7,176 to 7,184.

Morris, D.E., J.E. Oakley and J.A. Crowe. 2014. “A web-based tool for eliciting probability distributions from experts”, Environmental Modelling and Software, volume 52, pages 1 to 4.

Oakley, J. and A. O’Hagan. 2016. Shelf: the Sheffield elicitation framework (version 3.0), accessible at: http://www.tonyohagan.co.uk/shelf/SHELF3.html.

O’Hagan, A., C.E. Buck, A. Daneshkhah, J.R. Eiser, P.H. Garthwaite, D.J. Jenkinson, J.E. Oakley and T. Rakow. 2006. Uncertain Judgements: Eliciting Experts' Probabilities, Chichester, Wiley.

Parducci, A. 1963. “The Range-Frequency Compromise in Judgment”, Psychological Monographs, volume 77, issue 2, pages 1 to 50.

Runge, M.C., S.J. Converse and J.E. Lyons. 2011. “Which uncertainty? Using expert elicitation and expected value information to design an adaptive program”, Biological Conservation, volume 144, pages 1,214 to 1,223.

Shaw, C. 2008. “The National Population Projections Expert Advisory Group: Results from a questionnaire about future trends in fertility, mortality and migration”, Population Trends, volume 134, pages 42 to 53.

Sperber, D., D. Mortimer, P. Lorgelly and D. Berlowitz. 2013. “An expert on every street corner? Methods for eliciting distributions in geographically dispersed opinion pools”, Value in Health, volume 16, issue 2, pages 434 to 437.

Speirs-Bridge, A., F. Fidler, M. McBride, L. Flander, G. Cumming and M. Burgman. 2010. “Reducing overconfidence in the interval judgments of experts”, Risk Analysis, volume 30, issue 3, pages 512 to 523.

Tetlock, P.E. 2005. Expert Political Judgment: How Good Is It? How Can We Know?, Princeton University Press.

Tuljapurkar, S., R.D. Lee and Q. Li. 2004. “Random scenario forecasts versus stochastic forecasts”, International Statistical Review, volume 72, issue 2, pages 185 to 199.

Tversky, A. and D. Kahneman. 1974. “Judgment under uncertainty: heuristics and biases”, Science, volume 185, issue 4,157, pages 1,124 to 1,131.
