Statistical techniques
Survey or statistical program
- Labour Force Survey (5)
- Canadian Income Survey (3)
- Canadian Community Health Survey - Annual Component (2)
- Gross Domestic Product by Industry - National (Monthly) (1)
- Monthly Oil and Other Liquid Petroleum Products Pipeline Survey (1)
- Annual Electricity Supply and Disposition Survey (1)
- Survey of Employment, Payrolls and Hours (1)
- Survey of Financial Security (1)
- Monthly Passenger Bus and Urban Transit Survey (1)
- Stock and Consumption of Fixed Non-residential Capital (1)
- Tuition and Living Accommodation Costs (1)
- Vital Statistics - Death Database (1)
- Uniform Crime Reporting Survey (1)
- Survey of Household Spending (1)
- Households and the Environment Survey (1)
- Census of Population (1)
- Annual Income Estimates for Census Families and Individuals (T1 Family File) (1)
- Biennial Drinking Water Plants Survey (1)
- Gross Domestic Expenditures on Research and Development (1)
- Survey of Safety in Public and Private Spaces (1)
- Canadian Housing Survey (1)
- Survey on Early Learning and Child Care Arrangements (SELCCA) (1)
- Canadian Perspectives Survey Series (CPSS) (1)
- Labour Market Indicators (1)
Results
All (33) (0 to 10 of 33 results)
- Journals and periodicals: 11-633-X. Description: Papers in this series provide background discussions of the methods used to develop data for economic, health, and social analytical studies at Statistics Canada. They are intended to provide readers with information on the statistical methods, standards and definitions used to develop databases for research purposes. All papers in this series have undergone peer and institutional review to ensure that they conform to Statistics Canada's mandate and adhere to generally accepted standards of good professional practice. Release date: 2026-04-24
- Articles and reports: 12-001-X202500200001. Description: Nested error regression models are commonly used to incorporate unit-specific auxiliary variables to improve small area estimates. When the mean structure of the model is misspecified, the design-based mean squared prediction error (MSPE) of Empirical Best Linear Unbiased Predictors (EBLUP) generally increases. The Observed Best Prediction (OBP) method has been proposed with the intent to improve on the design-based MSPE over EBLUP. In this paper, we conduct Monte Carlo simulation experiments to understand the effect of misspecification of mean structures on different small area estimators. Our findings suggest that the OBP using unit-level auxiliary variables does not outperform the EBLUP in terms of design-based MSPE unless the number of small areas m is extremely large. Conversely, the performance of OBP improves significantly when area-level auxiliary variables are employed. This paper includes both analytical and numerical evidence to demonstrate these observations, providing practical insights for addressing model misspecification in small area estimation (SAE). Release date: 2025-12-23
- Articles and reports: 12-001-X202500200007. Description: Although probability samples have been regarded as the gold standard for collecting information in population-based studies, non-probability samples are frequently used in practice because of their low cost, convenience, and the lack of a sampling frame for the survey. Naïve estimates based on non-probability samples without any adjustment may be misleading due to selection bias. Recently, a valid data integration approach that includes mass imputation, propensity score weighting, and calibration has been used to improve the representativeness of non-probability samples. The effectiveness of the mass imputation approach depends on the underlying model assumptions. In this paper, we propose using deep learning for mass imputation when combining probability and non-probability samples and compare it with several modern machine learning-based mass imputation approaches, including generalized additive modelling, regression trees, random forests, and XGBoost. In the simulation study, deep learning-based approaches prove more robust and effective than the other mass imputation approaches against the failure of the underlying model assumptions under non-linearity scenarios. Release date: 2025-12-23
- Articles and reports: 12-001-X202500200008. Description: Classical design-based survey estimation relies on a properly specified sampling design for valid inference. We consider the properties of regression estimation under a misspecified sample design, in which the nominal and true inclusion probabilities do not necessarily match. This general misspecified sample design setting encompasses many challenges in the modern survey environment. Under this setting, an asymptotic analysis of the regression estimator, an expression for the bias, and an expression for the variance are presented. Further, a consistent variance estimator is derived, and an expression that estimates the bias in part or in whole is discussed. This latter expression may be used by a practitioner as an indicator of the presence of bias due to misspecification. A simulation study is conducted to support the presented theory. Release date: 2025-12-23
- Articles and reports: 18-001-X2025001. Description: This paper brings the analysis of business clusters to a more granular geographic scale by developing a methodology for identifying business clusters at the neighborhood level. The proposed method identifies clusters of businesses at the dissemination block (DB) level, one of the most granular spatial units of analysis defined by Statistics Canada. The method is developed with an application to four census metropolitan areas (CMAs) of different sizes and for different industry cluster specifications, including simple 2-digit North American Industry Classification System (NAICS) groups as well as industry clusters resulting from groupings of NAICS codes, as defined by Delgado et al. (2014). Release date: 2025-10-10
- Articles and reports: 12-001-X202500100004. Description: Survey data collection is often plagued by unit and item nonresponse. To reduce reliance on strong assumptions about the missingness mechanisms, statisticians can use information about population marginal distributions known, for example, from censuses or administrative databases. One approach that does so is the Missing Data with Auxiliary Margins, or MD-AM, framework, which uses multiple imputation for both unit and item nonresponse so that survey-weighted estimates accord with the known marginal distributions. However, this framework relies on specifying and estimating a joint distribution for the survey data and nonresponse indicators, which can be computationally and practically daunting in data with many variables of mixed types. We propose two adaptations to the MD-AM framework to simplify the imputation task. First, rather than specifying a joint model for unit respondents' data, we use random hot deck imputation while still leveraging the known marginal distributions. Second, instead of sampling from conditional distributions implied by the joint model for the missing data due to item nonresponse, we apply multiple imputation by chained equations for item nonresponse before imputation for unit nonresponse. Using simulation studies with nonignorable missingness mechanisms, we demonstrate that the proposed approach can provide more accurate point and interval estimates than models that do not leverage the auxiliary information. We illustrate the approach using data on voter turnout from the U.S. Current Population Survey. Release date: 2025-06-30
- Articles and reports: 12-001-X202500100013. Description: This discussion of the paper by Rao and Lohr focuses on the use of machine learning procedures for estimating finite population parameters. While there is growing interest in these methods within national statistical offices, several areas remain largely unexplored and warrant significant attention in the coming years. In this discussion, I highlight potential topics for future research and development in this rapidly evolving field. Release date: 2025-06-30
- Articles and reports: 12-001-X202400200001. Description: Cochran's rule states that a standard (Wald) two-sided 95% confidence interval around a sample mean drawn from a population with positive skewness is reasonable when the sample size is greater than 25 times the square of the skewness coefficient of the population. We investigate whether a variant of this crude rule applies for a proportion estimated from a stratified simple random sample. Release date: 2024-12-20
- Articles and reports: 12-001-X202400200007. Description: The capture-recapture method can be applied to measure the coverage of administrative and big data sources in official statistics. In its basic form, it involves the linkage of two sources while assuming a perfect linkage and other standard assumptions. In practice, linkage errors arise and are a potential source of bias when the linkage is based on quasi-identifiers. These errors include false positives and false negatives: the former arise when linking a pair of records from different units, and the latter arise when failing to link a pair of records from the same unit. So far, existing solutions have resorted to costly clerical reviews, or they have made the restrictive conditional independence assumption. In this work, these requirements are relaxed by instead modeling the number of links from a record. The same approach may be taken to estimate the linkage accuracy without clerical reviews when linking two sources that each have some undercoverage. Release date: 2024-12-20
- Articles and reports: 12-001-X202400200010. Design-based estimation of small and empty domains in survey data analysis using order constraints. Description: Recent work in survey domain estimation has shown that incorporating a priori assumptions about orderings of population domain means reduces the variance of the estimators and provides smaller confidence intervals with good coverage. Here we show how partial ordering assumptions allow design-based estimation of sample means in domains for which the sample size is zero, with conservative variance estimates and confidence intervals. Order restrictions can also substantially improve estimation and inference in small-size domains. Examples with well-known survey data sets demonstrate the utility of the methods. Code to implement the examples using the R package csurvey is given in the appendix. Release date: 2024-12-20
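The Cochran's rule entry in the list above (12-001-X202400200001) reduces to simple arithmetic: a Wald 95% interval for a mean is considered reasonable when n exceeds 25 times the squared population skewness. A minimal sketch of that check (the function names are illustrative, not from the paper):

```python
def cochran_threshold(skewness: float) -> float:
    """Sample-size threshold under Cochran's rule: n must exceed
    25 times the square of the population skewness coefficient."""
    return 25.0 * skewness ** 2

def wald_ci_ok(n: int, skewness: float) -> bool:
    """True when the sample size n satisfies Cochran's rule."""
    return n > cochran_threshold(skewness)

# A population with skewness 2 requires n > 25 * 2**2 = 100 observations.
```

The paper cited above investigates whether a variant of this crude rule carries over to proportions estimated from stratified simple random samples; this sketch only restates the classical rule for a mean.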
Data (1) (1 result)
- Table: 11-10-0074-01. Geography: Census tract. Frequency: Occasional. Description: The divergence index (D-index) describes the degree to which families with different income levels mix together in neighbourhoods. It compares neighbourhood (census tract, CT) discrete income distributions to a base distribution, which is the income quintiles of the neighbourhood's census metropolitan area (CMA). Release date: 2020-06-22
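The D-index described above compares a tract's income distribution to its CMA's quintile base distribution. A minimal sketch of a Kullback-Leibler-style divergence between the two distributions, assuming the published index takes this common form (the exact formula used in table 11-10-0074-01 should be checked against its documentation):

```python
import math

def divergence_index(tract_shares, base_shares):
    """KL-style divergence of a tract's income distribution from the CMA
    base distribution: D = sum_i p_i * ln(p_i / q_i). Categories with a
    zero tract share contribute nothing (the limit of p*ln(p) as p -> 0)."""
    return sum(p * math.log(p / q)
               for p, q in zip(tract_shares, base_shares) if p > 0)

# CMA base distribution: income quintiles, 20% of families in each.
base = [0.2] * 5
# A tract that mirrors the CMA's quintile mix has D = 0 (fully mixed).
even = divergence_index([0.2] * 5, base)
# A tract concentrated in the bottom quintile diverges from the base.
skewed = divergence_index([0.6, 0.2, 0.1, 0.05, 0.05], base)
```

Larger values indicate stronger income sorting relative to the surrounding CMA; zero indicates a tract whose income mix matches the CMA's quintile split exactly.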
Analysis (31)
Reference (1) (1 result)
- Surveys and statistical programs – Documentation: 84-538-X. Geography: Canada. Description: This electronic publication presents the methodology underlying the production of the life tables for Canada, provinces and territories. Release date: 2023-08-28
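At the core of any life-table methodology like the one documented in 84-538-X is the survivorship column: starting from a radix cohort, each age's death probability thins the cohort. A toy sketch of that recurrence (illustrative only; the actual 84-538-X methodology involves much more, such as smoothing and age-interval adjustments):

```python
def survivorship(qx, radix=100_000):
    """Survivorship column l_x from age-specific death probabilities q_x:
    l_0 = radix (a conventional starting cohort), and
    l_{x+1} = l_x * (1 - q_x) for each subsequent age."""
    lx = [float(radix)]
    for q in qx:
        lx.append(lx[-1] * (1.0 - q))
    return lx

# Toy death probabilities for ages 0 through 3 (made-up values).
lx = survivorship([0.005, 0.001, 0.001, 0.002])
```

Each entry gives the expected number of the original 100,000 still alive at that exact age; the column is non-increasing by construction.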