Editing and imputation


Results

All (98) (0 to 10 of 98 results)

  • Articles and reports: 12-001-X202500200007
    Description: Although probability samples are regarded as the gold standard for collecting information in population-based studies, non-probability samples are frequently used in practice because of their low cost, their convenience, and the lack of a sampling frame for the survey. Naïve estimates based on non-probability samples without any adjustment may be misleading due to selection bias. Recently, valid data integration approaches that include mass imputation, propensity score weighting, and calibration have been used to improve the representativeness of non-probability samples. The effectiveness of mass imputation depends on the underlying model assumptions. In this paper, we propose using deep learning for mass imputation when combining probability and non-probability samples, and compare it with several modern machine learning-based mass imputation approaches, including generalized additive models, regression trees, random forests, and XGBoost. In the simulation study, the deep learning-based approaches are shown to be more robust and effective than the other mass imputation approaches against failures of the underlying model assumptions under non-linearity scenarios.
    Release date: 2025-12-23

  • Articles and reports: 11-522-X202500100025
    Description: National statistical offices have increasingly adopted machine learning (ML) for its potential to improve survey estimates. ML techniques offer significant advantages, notably the ability to manage high-dimensional data and to capture complex, nonlinear relationships, thereby enhancing the overall quality of survey statistics. In this article, following the approach of Chernozhukov et al. (2018), we describe a double debiased machine learning framework that enables valid statistical inference when imputed estimators are derived from ML procedures. Simulation results suggest that the proposed framework performs well in a wide range of scenarios.
    Release date: 2025-09-08

  • Articles and reports: 11-522-X202500100034
    Description: Detailed data on the destination of manufacturing sales have not historically been available to Canadians. Through integration of annual survey data, a destination-of-sales table by industry and province of origin was developed for the annual and monthly manufacturing surveys at Statistics Canada. Respondents to the annual survey are asked for their distribution of sales, as percentages, across 15 destinations. To tackle the difficulty of generating an establishment-level distribution for multi-province respondents, three approaches were compared: using the respondents' total distribution for all their establishments, using optimization, and using the distributions of single-province respondents. The imputed distribution of destination sales from the annual data was then applied to the monthly survey's sales values. This paper delves into the challenges of imputing destination sales (especially for respondents with establishments in multiple provinces), ensuring that sales match marginal origin-province totals, and allocating a distribution of destinations based on data from the annual program to the monthly estimates.
    Release date: 2025-09-08
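The core allocation step described above (applying an annual destination-share distribution to a monthly sales value) can be sketched as follows. This is a minimal illustration with hypothetical destination names; it ignores the multi-province and marginal-calibration complications the paper addresses.

```python
def allocate_sales(monthly_total, annual_shares):
    """Apply a respondent's annual destination-share distribution
    (percentages summing to 100) to a monthly sales value."""
    return {dest: monthly_total * pct / 100
            for dest, pct in annual_shares.items()}

# Hypothetical respondent: $50,000 in monthly sales, annual shares known.
alloc = allocate_sales(50000, {"Ontario": 60, "Quebec": 30, "Exports": 10})
# alloc maps each destination to its imputed share of the monthly value
```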

  • Articles and reports: 11-522-X202500100035
    Description: Historically, the Canadian Census of Population Edit and Imputation (E&I) process has used a nearest-neighbour donor imputation methodology in which the distance between a failed unit and a potential donor is obtained through a weighted combination of auxiliary variables. Revising the model between cycles can be a complicated and time-consuming process, given that there is no standard approach to variable selection and weighting across topics. This paper illustrates the potential of the Relief variable selection algorithm to create a machine learning-driven approach to variable selection and weighting that is standardized and comparable between census cycles and among the many topics of the census. An overview of how this process may be applied in practice is presented, followed by results on several topics that indicate a general improvement over previous methods.
    Release date: 2025-09-08
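The weighted distance-based donor search underlying this kind of E&I system can be sketched as follows. This is a minimal illustration with hypothetical auxiliary variables and weights, not the census implementation; in practice the weights are what the Relief algorithm would help select.

```python
import math

def weighted_nn_donor(failed, donors, weights):
    """Impute a failed record by copying the value from the nearest donor,
    where distance is a weighted combination of auxiliary variables."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2
                             for w, x, y in zip(weights, a["aux"], b["aux"])))
    best = min(donors, key=lambda d: dist(failed, d))
    return {**failed, "value": best["value"]}

# Hypothetical auxiliaries: [age, labour-force flag]; the second variable
# is weighted more heavily in the distance.
donors = [
    {"aux": [30, 1], "value": "employed"},
    {"aux": [65, 0], "value": "retired"},
]
failed = {"aux": [62, 0], "value": None}
imputed = weighted_nn_donor(failed, donors, weights=[1.0, 5.0])
```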

  • Articles and reports: 12-001-X202500100004
    Description: Survey data collection is often plagued by unit and item nonresponse. To reduce reliance on strong assumptions about the missingness mechanisms, statisticians can use information about population marginal distributions known, for example, from censuses or administrative databases. One approach that does so is the Missing Data with Auxiliary Margins, or MD-AM, framework, which uses multiple imputation for both unit and item nonresponse so that survey-weighted estimates accord with the known marginal distributions. However, this framework relies on specifying and estimating a joint distribution for the survey data and nonresponse indicators, which can be computationally and practically daunting in data with many variables of mixed types. We propose two adaptations to the MD-AM framework to simplify the imputation task. First, rather than specifying a joint model for unit respondents’ data, we use random hot deck imputation while still leveraging the known marginal distributions. Second, instead of sampling from conditional distributions implied by the joint model for the missing data due to item nonresponse, we apply multiple imputation by chained equations for item nonresponse before imputation for unit nonresponse. Using simulation studies with nonignorable missingness mechanisms, we demonstrate that the proposed approach can provide more accurate point and interval estimates than models that do not leverage the auxiliary information. We illustrate the approach using data on voter turnout from the U.S. Current Population Survey.
    Release date: 2025-06-30
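The random hot deck step mentioned above can be sketched as follows. This is a minimal class-based illustration with hypothetical field names; it omits the part of the MD-AM adaptation that leverages the known marginal distributions.

```python
import random

def random_hot_deck(records, field, class_field, seed=0):
    """Random hot deck: fill a missing item by drawing, at random, an
    observed value from a respondent in the same imputation class."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    pools = {}
    for r in records:
        if r[field] is not None:
            pools.setdefault(r[class_field], []).append(r[field])
    return [r if r[field] is not None
            else {**r, field: rng.choice(pools[r[class_field]])}
            for r in records]

# Hypothetical records: the last unit's income is drawn from the
# observed "east" incomes.
data = [
    {"region": "east", "income": 40000},
    {"region": "east", "income": 42000},
    {"region": "west", "income": 55000},
    {"region": "east", "income": None},
]
filled = random_hot_deck(data, field="income", class_field="region")
```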

  • Articles and reports: 12-001-X202200200009
    Description: Multiple imputation (MI) is a popular approach for dealing with missing data arising from non-response in sample surveys. Multiple imputation by chained equations (MICE) is one of the most widely used MI algorithms for multivariate data, but it lacks a theoretical foundation and is computationally intensive. Recently, missing-data imputation methods based on deep learning models have been developed, with encouraging results in small studies. However, there has been limited research evaluating their performance in realistic settings compared to MICE, particularly in large surveys. We conduct extensive simulation studies based on a subsample of the American Community Survey to compare the repeated sampling properties of four machine learning-based MI methods: MICE with classification trees, MICE with random forests, generative adversarial imputation networks, and multiple imputation using denoising autoencoders. We find that the deep learning imputation methods are superior to MICE in terms of computational time. However, with the default choice of hyperparameters in the common software packages, MICE with classification trees consistently outperforms the deep learning imputation methods, often by a large margin, in terms of bias, mean squared error, and coverage under a range of realistic settings.
    Release date: 2022-12-15

  • Articles and reports: 12-001-X202200100008
    Description: The Multiple Imputation of Latent Classes (MILC) method combines multiple imputation and latent class analysis to correct for misclassification in combined datasets. MILC also generates a multiply imputed dataset that can be used to estimate different statistics in a straightforward manner, ensuring that uncertainty due to misclassification is incorporated into the total variance. This paper investigates how the MILC method can be adapted for census purposes: how it deals with a finite and complete population register, how it can simultaneously correct misclassification in multiple latent variables, and how multiple edit restrictions can be incorporated. A simulation study shows that the MILC method is generally able to reproduce cell frequencies in both low- and high-dimensional tables with little bias. Variance can also be estimated appropriately, although it is overestimated when cell frequencies are small.
    Release date: 2022-06-21

  • Articles and reports: 12-001-X202100100004
    Description: Multiple data sources are becoming increasingly available for statistical analyses in the era of big data. As an important example in finite-population inference, we consider an imputation approach to combining data from a probability survey and big found data. We focus on the case when the study variable is observed in the big data only, but the other auxiliary variables are commonly observed in both data sources. Unlike the usual imputation for missing data analysis, we create imputed values for all units in the probability sample. Such mass imputation is attractive in the context of survey data integration (Kim and Rao, 2012). We extend mass imputation as a tool for data integration of survey data and big non-survey data. The mass imputation methods and their statistical properties are presented. The matching estimator of Rivers (2007) is also covered as a special case. Variance estimation with mass-imputed data is discussed. The simulation results demonstrate that the proposed estimators outperform existing competitors in terms of robustness and efficiency.
    Release date: 2021-06-24

  • Articles and reports: 12-001-X202100100009
    Description: Predictive mean matching is a commonly used imputation procedure for addressing the problem of item nonresponse in surveys. The customary approach relies upon the specification of a single outcome regression model. In this note, we propose a novel predictive mean matching procedure that allows the user to specify multiple outcome regression models. The resulting estimator is multiply robust in the sense that it remains consistent if one of the specified outcome regression models is correctly specified. The results from a simulation study suggest that the proposed method performs well in terms of bias and efficiency.
    Release date: 2021-06-24

  • 19-22-0004
    Description: One of the main objectives of statistics is to distill data into information which can be summarized and easily understood. Data visualizations, which include graphs and charts, are powerful ways of doing so. The purpose of this information session is to provide examples of common graphs and charts, highlight practical advice to help the audience choose the right display for their data, and identify what to avoid and why. An overall objective is to build capacity and increase understanding of fundamental techniques which foster accurate and effective dissemination of statistics and research findings.

    https://www.statcan.gc.ca/en/wtc/information/19220004
    Release date: 2020-10-30
Data (0) (0 results)

No content available at this time.

Analysis (90) (0 to 10 of 90 results)

  • Articles and reports: 12-001-X202000100006
    Description: In surveys, logical boundaries among variables or among survey waves make imputation of missing values complicated. We propose a new regression-based multiple imputation method to deal with survey nonresponse subject to two-sided logical boundaries. This imputation method automatically satisfies the boundary conditions without an additional acceptance/rejection procedure and uses the boundary information both to derive an imputed value and to determine its suitability. Simulation results show that our new imputation method outperforms existing imputation methods for both mean and quantile estimation, regardless of missingness rates, error distributions, and missingness mechanisms. We apply our method to impute the self-reported variable "years of smoking" in successive health screenings of Koreans.
    Release date: 2020-06-30
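To make the boundary setting concrete: a naive baseline (not the authors' method) simply clamps a regression prediction into the two-sided logical interval. Here the hypothetical bounds are those implied for "years of smoking" by the previous wave: the current answer can be no less than the previous value and no more than that value plus the time elapsed.

```python
def impute_with_bounds(prev_value, years_elapsed, predicted):
    """Naive bounded imputation (illustration only, not the paper's
    method): clamp the regression prediction into the two-sided
    logical interval [prev_value, prev_value + years_elapsed]."""
    lo, hi = prev_value, prev_value + years_elapsed
    return min(max(predicted, lo), hi)

# A prediction of 14 years is logically impossible two years after a
# reported 10, so it is pulled back to the upper bound of 12.
imputed = impute_with_bounds(prev_value=10, years_elapsed=2, predicted=14)
```

The paper's method improves on this kind of clamping by building the bounds into the imputation model itself rather than post-processing the prediction.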
Reference (7) (7 results)

  • Surveys and statistical programs – Documentation: 71F0031X2005002
    Description: This paper introduces and explains modifications made to the Labour Force Survey estimates in January 2005, including the adjustment of all LFS estimates to reflect population counts based on the 2001 Census, updates to the industry and occupation classification systems, and sample redesign changes.
    Release date: 2005-01-26

  • Surveys and statistical programs – Documentation: 92-397-X
    Description: This report covers concepts and definitions, the imputation method and data quality for this variable. The 2001 Census collected information on three types of unpaid work performed during the week preceding the census: looking after children, housework and caring for seniors. The 2001 data on unpaid work are compared with the 1996 Census data and with data from the General Social Survey (use of time, 1998). The report also includes historical tables.
    Release date: 2005-01-11

  • Surveys and statistical programs – Documentation: 92-388-X
    Description: This report contains basic conceptual and data quality information to help users interpret and make use of census occupation data. It gives an overview of the collection, coding (to the 2001 National Occupational Classification), edit and imputation of the occupation data from the 2001 Census. The report describes procedural changes between the 2001 and earlier censuses, and provides an analysis of the quality of the 2001 Census occupation data. Finally, it details the revision of the 1991 Standard Occupational Classification, used in the 1991 and 1996 Censuses, to the 2001 National Occupational Classification for Statistics used in 2001. The historical comparability of data coded to the two classifications is discussed. Appendices include a table showing historical data for the 1991, 1996 and 2001 Censuses.
    Release date: 2004-07-15

  • Surveys and statistical programs – Documentation: 92-398-X
    Description: This report contains basic conceptual and data quality information intended to facilitate the use and interpretation of census class-of-worker data. It provides an overview of the class-of-worker processing cycle, including elements such as regional office processing and edit and imputation. The report concludes with summary tables that indicate the level of data quality in the 2001 Census class-of-worker data.
    Release date: 2004-04-22

  • Surveys and statistical programs – Documentation: 85-602-X
    Description: The purpose of this report is to provide an overview of existing methods and techniques that use personal identifiers to support record linkage. Record linkage can be loosely defined as a methodology for manipulating and/or transforming personal identifiers from individual data records in one or more operational databases, and subsequently attempting to match these personal identifiers to create a composite record about an individual. Record linkage is not intended to uniquely identify individuals for operational purposes; however, it does provide probabilistic matches of varying degrees of reliability for use in statistical reporting. Techniques employed in record linkage may also be useful for investigative purposes, to help narrow the field of search against existing databases when some form of personal identification information exists.
    Release date: 2000-12-05

  • Surveys and statistical programs – Documentation: 75F0002M1998012
    Description: This paper looks at the work of the task force responsible for reviewing Statistics Canada's household and family income statistics programs, and at one of the associated program changes: the integration of two major sources of annual income data in Canada, the Survey of Consumer Finances (SCF) and the Survey of Labour and Income Dynamics (SLID).
    Release date: 1998-12-30

  • Surveys and statistical programs – Documentation: 75F0002M1997006
    Description: This report documents the edit and imputation approach taken in processing Wave 1 income data from the Survey of Labour and Income Dynamics (SLID).
    Release date: 1997-12-31