Results
All (77) (20 to 30 of 77 results)
- 21. Exploring Canada’s Open Government Portal (Archived)
Articles and reports: 11-522-X201700014721
Description:
Open data is becoming an increasingly important expectation of Canadians, researchers, and developers. Learn how and why the Government of Canada has centralized the distribution of all Government of Canada open data through Open.Canada.ca and how this initiative will continue to support the consumption of statistical information.
Release date: 2016-03-24
- Articles and reports: 11-522-X201700014731
Description:
Our study describes various factors that are of concern when evaluating disclosure risk of contextualized microdata and some of the empirical steps that are involved in their assessment. Utilizing synthetic sets of survey respondents, we illustrate how different postulates shape the assessment of risk when considering: (1) estimated probabilities that unidentified geographic areas are represented within a survey; (2) the number of people in the population who share the same personal and contextual identifiers as a respondent; and (3) the anticipated amount of coverage error in census population counts and extant files that provide identifying information (like names and addresses).
Release date: 2016-03-24
- 23. A Modern Job Submission Application to Access IAB’s Confidential Administrative and Survey Research Data (Archived)
Articles and reports: 11-522-X201700014732
Description: The Institute for Employment Research (IAB) is the research unit of the German Federal Employment Agency. Via the Research Data Centre (FDZ) at the IAB, administrative and survey data on individuals and establishments are provided to researchers. In cooperation with the Institute for the Study of Labor (IZA), the FDZ has implemented the Job Submission Application (JoSuA) environment, which enables researchers to submit jobs for remote data execution through a custom-built web interface. Moreover, two types of user-generated output files may be distinguished within the JoSuA environment, which allows for faster and more efficient disclosure review services.
Release date: 2016-03-24
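The risk factors listed for 11-522-X201700014731 above can be made concrete. A standard way to operationalize point (2), the number of people in the population who share a respondent's identifiers, is to estimate the expected number of correct re-identifications among sample-unique records. The sketch below is illustrative only: the key format and counts are invented, and this need not be the authors' exact estimator.

```python
from collections import Counter

def expected_reidentifications(sample_keys, population_counts):
    """Sum 1/F_k over sample-unique records, where F_k is the number of
    people in the population sharing record k's identifying key.  This is a
    common estimate of the expected number of correct re-identifications
    an intruder could make by matching on those identifiers."""
    in_sample = Counter(sample_keys)
    return sum(1.0 / population_counts[key]
               for key in sample_keys
               if in_sample[key] == 1)  # only sample uniques carry this risk

# Hypothetical keys combining personal and contextual identifiers
keys = ["35-F-nurse", "35-F-nurse", "62-M-farmer", "29-F-pilot"]
pop = {"35-F-nurse": 120, "62-M-farmer": 4, "29-F-pilot": 1}
risk = expected_reidentifications(keys, pop)  # 1/4 + 1/1 = 1.25
```

Coverage error in the census counts, point (3) in the entry, would enter as uncertainty in the `population_counts` values.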
- 24. Enhancing data sharing via "safe designs" (Archived)
Articles and reports: 11-522-X201700014733
Description:
The social value of data collections is dramatically enhanced by the broad dissemination of research files and the resulting increase in scientific productivity. Currently, most studies are designed with a focus on collecting information that is analytically useful and accurate, with little forethought as to how it will be shared. Both the literature and practice also presume that disclosure analysis will take place after data collection. But to produce public-use data of the highest analytical utility for the largest user group, disclosure risk must be considered at the beginning of the research process. Drawing upon economic and statistical decision-theoretic frameworks and survey methodology research, this study seeks to enhance the scientific productivity of shared research data by describing how disclosure risk can be addressed in the earliest stages of research through the formulation of "safe designs" and "disclosure simulations". An applied statistical approach is taken in: (1) developing and validating models that predict the composition of survey data under different sampling designs; (2) selecting and/or developing measures and methods for assessing disclosure risk, analytical utility, and survey costs that are best suited for evaluating sampling and database designs; and (3) conducting simulations to gather estimates of risk, utility, and cost for studies with a wide range of sampling and database design characteristics.
Release date: 2016-03-24
- Articles and reports: 11-522-X201700014734
Description:
Data protection and privacy are key challenges that must be tackled with high priority in order to enable the use of Big Data in the production of Official Statistics. This was emphasized in 2013 by the Directors of the National Statistical Institutes (NSIs) of the European Statistical System Committee (ESSC) in the Scheveningen Memorandum. The ESSC requested that Eurostat and the NSIs elaborate an action plan with a roadmap for following up the implementation of the Memorandum. At the Riga meeting on September 26, 2014, the ESSC endorsed the Big Data Action Plan and Roadmap 1.0 (BDAR) presented by the Eurostat Task Force on Big Data (TFBD) and agreed to integrate it into the ESS Vision 2020 portfolio. Eurostat also collaborates in this field with external partners such as the United Nations Economic Commission for Europe (UNECE). The big data project of the UNECE High-Level Group is an international project on the role of big data in the modernization of statistical production. It comprised four ‘task teams’ addressing different aspects of Big Data relevant to official statistics: Privacy, Partnerships, Sandbox, and Quality. The Privacy Task Team finished its work in 2014: it gave an overview of the existing tools for managing privacy risks, described how the risk of identification relates to Big Data characteristics, and drafted recommendations for National Statistical Offices (NSOs). It mainly concluded that extensions to existing frameworks, including the use of new technologies, were needed to deal with privacy risks related to the use of Big Data. The BDAR builds on the work achieved by the UNECE task teams. Specifically, it recognizes that a number of big data sources contain sensitive information, that their use for official statistics may induce negative perceptions among the general public and other stakeholders, and that this risk should be mitigated in the short to medium term.
It proposes launching several actions, such as a review of the ethical principles governing the roles and activities of the NSIs and a strong communication strategy. The paper presents the different actions undertaken within the ESS and in collaboration with UNECE, as well as potential technical and legal solutions to be put in place to address the data protection and privacy risks in the use of Big Data for Official Statistics.
Release date: 2016-03-24
- Articles and reports: 11-522-X201700014735
Description:
Microdata dissemination normally requires that data reduction and modification methods be applied, and the degree to which these methods are applied depends on the control methods that will be required to access and use the data. An approach that is in some circumstances more suitable for accessing data for statistical purposes is secure computation, which involves computing analytic functions on encrypted data without the need to decrypt the underlying source data to run a statistical analysis. This approach also allows multiple sites to contribute data while providing strong privacy guarantees: the data can be pooled, and contributors can compute analytic functions without any party revealing its inputs to the others. We explain how secure computation can be applied in practical contexts, with some theoretical results and real healthcare examples.
Release date: 2016-03-24
- 27. Beta testing: Revising how Statistics Canada publishes information November 12 to December 12, 2014 (Archived)
Stats in brief: 11-001-X201431611101
Description: Release published in The Daily – Statistics Canada’s official release bulletin
Release date: 2014-11-12
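The secure computation approach described for 11-522-X201700014735 above can be illustrated with additive secret sharing, one standard building block of secure multi-party computation. This is a minimal sketch, not the protocol from the paper (which computes on encrypted healthcare data): each contributor splits its value into random shares, so the pooled total is recoverable while any incomplete set of shares looks uniformly random.

```python
import random

MOD = 2 ** 61 - 1  # all arithmetic happens modulo a large prime

def make_shares(value, n_parties, rng):
    """Split value into n additive shares that sum to value (mod MOD).
    Any n-1 of the shares together reveal nothing about the value."""
    shares = [rng.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def secure_sum(inputs, seed=0):
    """Each party shares its input among all parties; each party then sums
    the shares it received.  Recombining the partial sums yields the total
    without any single party ever seeing another party's raw input."""
    rng = random.Random(seed)
    n = len(inputs)
    all_shares = [make_shares(v, n, rng) for v in inputs]
    partial_sums = [sum(shares[j] for shares in all_shares) % MOD
                    for j in range(n)]
    return sum(partial_sums) % MOD

# three hypothetical sites pooling patient counts
total = secure_sum([120, 45, 300])  # 465
```

In a real deployment each party would generate its own randomness and exchange shares over secure channels; here a single seeded generator stands in for all parties.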
- Surveys and statistical programs – Documentation: 11-522-X201300014285
Description:
The 2011 National Household Survey (NHS) is a voluntary survey that replaced the traditional mandatory long-form questionnaire of the Canadian census of population. The NHS sampled about 30% of Canadian households and achieved a design-weighted response rate of 77%. In comparison, the last census long form was sent to 20% of households and achieved a response rate of 94%. Based on the long-form data, Statistics Canada traditionally produces two public use microdata files (PUMFs): the individual PUMF and the hierarchical PUMF. Both give information on individuals, but the hierarchical PUMF provides extra information on the household and family relationships between the individuals. To produce two PUMFs, based on the NHS data, that cover the whole country evenly and that do not overlap, we applied a special sub-sampling strategy. Difficulties in the confidentiality analyses have increased because of the numerous new variables, the more detailed geographic information and the voluntary nature of the NHS. This paper describes the 2011 PUMF methodology and how it balances the requirements for more information and for low risk of disclosure.
Release date: 2014-10-31
- Articles and reports: 12-001-X201300111826
Description:
It is routine practice for survey organizations to provide replication weights as part of survey data files. These replication weights are meant to produce valid and efficient variance estimates for a variety of estimators in a simple and systematic manner. Most existing methods for constructing replication weights, however, are only valid for specific sampling designs and typically require a very large number of replicates. In this paper we first show how to produce replication weights based on the method outlined in Fay (1984) such that the resulting replication variance estimator is algebraically equivalent to the fully efficient linearization variance estimator for any given sampling design. We then propose a novel weight-calibration method to simultaneously achieve efficiency and sparsity in the sense that a small number of sets of replication weights can produce valid and efficient replication variance estimators for key population parameters. Our proposed method can be used in conjunction with existing resampling techniques for large-scale complex surveys. Validity of the proposed methods and extensions to some balanced sampling designs are also discussed. Simulation results showed that our proposed variance estimators perform very well in tracking coverage probabilities of confidence intervals. Our proposed strategies will likely have impact on how public-use survey data files are produced and how these data sets are analyzed.
Release date: 2013-06-28
- 30. Combining synthetic data with subsampling to create public use microdata files for large scale surveys (Archived)
Articles and reports: 12-001-X201200111687
Description:
To create public use files from large scale surveys, statistical agencies sometimes release random subsamples of the original records. Random subsampling reduces file sizes for secondary data analysts and reduces risks of unintended disclosures of survey participants' confidential information. However, subsampling does not eliminate risks, so that alteration of the data is needed before dissemination. We propose to create disclosure-protected subsamples from large scale surveys based on multiple imputation. The idea is to replace identifying or sensitive values in the original sample with draws from statistical models, and release subsamples of the disclosure-protected data. We present methods for making inferences with the multiple synthetic subsamples.
Release date: 2012-06-27
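The synthetic-subsample idea in the entry above can be sketched end to end. This is a deliberately simplified illustration, assuming a single sensitive numeric variable and a plain normal model; the paper's multiple-imputation machinery and the combining rules needed for valid inference are far richer.

```python
import random
import statistics

def synthetic_subsamples(values, m=3, frac=0.3, seed=0):
    """Fit a simple model (here: a normal distribution) to a sensitive
    variable, replace every value with a draw from that model, and release
    m random subsamples of the synthetic data.  Analysts would then combine
    estimates across the m subsamples."""
    rng = random.Random(seed)
    mu = statistics.fmean(values)
    sigma = statistics.stdev(values)
    k = max(1, int(len(values) * frac))   # subsample size
    releases = []
    for _ in range(m):
        # replace identifying/sensitive values with model draws
        synthetic = [rng.gauss(mu, sigma) for _ in values]
        # release only a random subsample of the protected data
        releases.append(rng.sample(synthetic, k))
    return releases
```

Subsampling reduces file size and residual risk; the synthesis step is what actually breaks the link back to individual respondents.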
Data (1) (1 result)
- 1. Historical Statistics of Canada (Archived)
Table: 11-516-X
Description:
The second edition of Historical statistics of Canada was jointly produced by the Social Science Federation of Canada and Statistics Canada in 1983. This volume contains about 1,088 statistical tables on the social, economic and institutional conditions of Canada from the start of Confederation in 1867 to the mid-1970s. The tables are arranged in sections with an introduction explaining the content of each section, the principal sources of data for each table, and general explanatory notes regarding the statistics. In most cases, there is sufficient description of the individual series to enable the reader to use them without consulting the numerous basic sources referenced in the publication.
The electronic version of this historical publication is available free of charge on the Statistics Canada website: the text as HTML pages and all tables as individual comma-delimited (CSV) spreadsheets, which can be viewed online or downloaded.
Release date: 1999-07-29
Analysis (68) (0 to 10 of 68 results)
- Articles and reports: 11-522-X202200100007
Description: With the availability of larger and more diverse data sources, statistical institutes in Europe are inclined to publish statistics on smaller groups than they used to. Moreover, high-impact global events like the COVID-19 crisis and the situation in Ukraine may also call for statistics on specific subgroups of the population. Publishing on small, targeted groups not only raises questions about the statistical quality of the figures, it also raises issues concerning statistical disclosure risk. The principle of statistical disclosure control does not depend on the size of the groups the statistics are based on. However, the risk of disclosure does depend on group size: the smaller the group, the higher the risk. Traditional ways to deal with statistical disclosure control and small group sizes include suppressing information and coarsening categories. These methods essentially increase the (mean) group sizes. More recent approaches include perturbative methods that aim to keep the group sizes small in order to preserve as much information as possible while reducing the disclosure risk sufficiently. In this paper we mention some European examples of special focus group statistics and discuss the implications for statistical disclosure control. Additionally, we discuss some issues that the use of perturbative methods brings along: the impact on disclosure risk and utility, as well as the challenges in communicating it properly.
Release date: 2024-03-25
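One perturbative method of the kind alluded to in the entry above is unbiased random rounding, which keeps small groups publishable while masking exact counts. A minimal sketch follows; base 5 is a common choice, but the base and the decision to round every cell are assumptions here, not details from the paper.

```python
import random

def random_round(count, base=5, rng=None):
    """Unbiased random rounding: round count down to a multiple of base with
    probability (base - r)/base and up with probability r/base, where
    r = count % base.  E[rounded] == count, so totals are preserved on
    average while no exact small count is released."""
    rng = rng or random.Random()
    r = count % base
    if r == 0:
        return count                   # already a multiple of the base
    if rng.random() < r / base:
        return count + (base - r)      # round up
    return count - r                   # round down

# perturb a small frequency table; every published cell is a multiple of 5
rng = random.Random(7)
table = {"group A": 13, "group B": 2, "group C": 40}
rounded = {cell: random_round(v, 5, rng) for cell, v in table.items()}
```

Unlike suppression or coarsening, this keeps the small groups themselves in the publication, at the cost of small, quantifiable noise in each cell.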
- Articles and reports: 11-633-X2024001
Description: The Longitudinal Immigration Database (IMDB) is a comprehensive source of data that plays a key role in the understanding of the economic behaviour of immigrants. It is the only annual Canadian dataset that allows users to study the characteristics of immigrants to Canada at the time of admission and their economic outcomes and regional (inter-provincial) mobility over a time span of more than 35 years.
Release date: 2024-01-22
- Stats in brief: 11-001-X202402237898
Description: Release published in The Daily – Statistics Canada’s official release bulletin
Release date: 2024-01-22
- Articles and reports: 12-001-X202300100006
Description: My comments consist of three components: (1) a brief account of my professional association with Chris Skinner; (2) observations on Skinner’s contributions to statistical disclosure control; and (3) some comments on making inferences from masked survey data.
Release date: 2023-06-30
- Articles and reports: 12-001-X202300100007
Description: I provide an overview of the evolution of statistical disclosure control (SDC) research over the last decades and how it has evolved to handle the data revolution, with more formal definitions of privacy. I emphasize the many contributions by Chris Skinner in the research areas of SDC. I review his seminal research, starting in the 1990s with his work on the release of UK Census sample microdata. This led to a wide range of research on measuring the risk of re-identification in survey microdata through probabilistic models. I also focus on other aspects of Chris’ research in SDC. Chris was the recipient of the 2019 Waksberg Award and sadly never got a chance to present his Waksberg Lecture at the Statistics Canada International Methodology Symposium. This paper follows the outline that Chris had prepared for that lecture.
Release date: 2023-06-30
- Articles and reports: 12-001-X202300100008
Description: This brief tribute reviews Chris Skinner’s main scientific contributions.
Release date: 2023-06-30
- Articles and reports: 11-633-X2022009
Description:
The Longitudinal Immigration Database (IMDB) is a comprehensive source of data that plays a key role in the understanding of the economic behaviour of immigrants. It is the only annual Canadian dataset that allows users to study the characteristics of immigrants to Canada at the time of admission and their economic outcomes and regional (inter-provincial) mobility over a time span of more than 35 years.
This report will discuss the IMDB data sources, concepts and variables, record linkage, data processing, dissemination, data evaluation and quality indicators, comparability with other immigration datasets, and the analyses possible with the IMDB.
Release date: 2022-12-05
- Articles and reports: 11-633-X2022007
Description:
This paper investigates how Statistics Canada can increase trust by giving users the ability to authenticate data from its website through digital signatures and blockchain technology.
Release date: 2022-09-19
- Stats in brief: 89-20-00082021001
Description: This video is part of the confidentiality vetting support series and presents examples of how to use SAS to perform the dominance and homogeneity tests while using the Census.
Release date: 2022-04-29
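The dominance test mentioned in the video above is commonly formalized as an (n,k) rule: a cell is considered disclosive when its largest n contributors account for more than k% of the cell total, since the dominant contributor's value could then be inferred closely. The video demonstrates this in SAS; the following Python sketch uses illustrative thresholds (n = 1, k = 85), which are assumptions rather than Statistics Canada's actual vetting parameters.

```python
def fails_dominance(contributions, n=1, k=85.0):
    """(n,k) dominance rule: flag a cell for suppression when the top n
    contributors make up more than k percent of the cell total."""
    total = sum(contributions)
    if total <= 0:
        return False
    top_n = sum(sorted(contributions, reverse=True)[:n])
    return 100.0 * top_n / total > k

# one firm contributes 90 of 100 units: the cell must be suppressed
flag = fails_dominance([90, 5, 5])
```

A homogeneity test would additionally flag cells where contributors' values are so similar that any one contributor can estimate the others' values.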
- Stats in brief: 89-20-00082021002
Description: This video is part of the confidentiality vetting support series and presents examples of how to use SAS to create proportion output for researchers working with confidential data.
Release date: 2022-04-27
Reference (9) (9 results)
- Surveys and statistical programs – Documentation: 32-26-0006
Description: This report provides data quality information pertaining to the Agriculture–Population Linkage, such as sources of error, matching process, response rates, imputation rates, sampling, weighting, disclosure control methods and data quality indicators.
Release date: 2023-08-25
- Geographic files and documentation: 12-572-X
Description:
The Standard Geographical Classification (SGC) provides a systematic classification structure that categorizes all of the geographic area of Canada. The SGC is the official classification used in the Census of Population and other Statistics Canada surveys.
The classification is organized in two volumes: Volume I, The Classification and Volume II, Reference Maps.
Volume II contains reference maps showing boundaries, names, codes and locations of the geographic areas in the classification. The reference maps show census subdivisions, census divisions, census metropolitan areas, census agglomerations, census metropolitan influenced zones and economic regions. Definitions for these terms are found in Volume I, The Classification. Volume I describes the classification and related standard geographic areas and place names.
The maps in Volume II can be downloaded in PDF format from our website.
Release date: 2022-02-09
- Surveys and statistical programs – Documentation: 11-522-X201300014285
Description:
The 2011 National Household Survey (NHS) is a voluntary survey that replaced the traditional mandatory long-form questionnaire of the Canadian census of population. The NHS sampled about 30% of Canadian households and achieved a design-weighted response rate of 77%. In comparison, the last census long form was sent to 20% of households and achieved a response rate of 94%. Based on the long-form data, Statistics Canada traditionally produces two public use microdata files (PUMFs): the individual PUMF and the hierarchical PUMF. Both give information on individuals, but the hierarchical PUMF provides extra information on the household and family relationships between the individuals. To produce two PUMFs, based on the NHS data, that cover the whole country evenly and that do not overlap, we applied a special sub-sampling strategy. Difficulties in the confidentiality analyses have increased because of the numerous new variables, the more detailed geographic information and the voluntary nature of the NHS. This paper describes the 2011 PUMF methodology and how it balances the requirements for more information and for low risk of disclosure.
Release date: 2014-10-31
- 4. 2006 Census Dissemination Consultation Report (Archived)
Notices and consultations: 92-132-X
Description:
This report describes the comments received as a result of the second round of the 2006 Census consultations. As with the previous 2006 Census consultation, this second round of consultations integrated discussions on the dissemination program, questionnaire content and census geography. However, the focus of this second round of consultations was placed on the 2001 Census of Population dissemination program and proposed directions for 2006 geography. Consultations were held from January to June 2004. Approximately 1,000 comments were captured through written submissions and the organization of over 40 meetings across Canada.
This report describes users' feedback on dissemination and geography issues received through this second round of consultations. In addition to users' comments, web metrics information serves as a valuable tool when evaluating the accessibility of public good data tables. Therefore, page view counts have been integrated into this report.
Some general planning assumptions that focus on the production and dissemination of 2006 Census products are also included in this report.
Release date: 2005-05-31
- 5. Consultation Guide - 2001 Census of Population Dissemination and Proposed Directions for 2006 Geography (Archived)
Notices and consultations: 92-131-G
Description:
This guide has been developed to help users convey their ideas and suggestions to Statistics Canada regarding the 2001 Census products and services line. It contains a series of questions about specific dissemination issues and topics related to the 2001 Census dissemination strategy. The document covers many aspects of census dissemination. Readers are welcome to focus on sections of particular interest to them. In addition, users are welcome to provide comments on any other census-related issues during this consultation process.
Release date: 2004-04-08
- Surveys and statistical programs – Documentation: 75F0002M2003002
Description:
This series provides detailed documentation on income developments, including survey design issues, data quality evaluation and exploratory research for the Survey of Labour and Income Dynamics in 2000.
Release date: 2003-06-11
- Surveys and statistical programs – Documentation: 75F0002M199303A
Description:
This paper is intended as an initial proposal for a strategy for the Survey of Labour and Income Dynamics (SLID) longitudinal microdata files.
Release date: 1995-12-30
- Surveys and statistical programs – Documentation: 75F0002M199303B
Description:
This paper presents detailed information on specific data variables for the Survey of Labour and Income Dynamics (SLID) microdata files.
Release date: 1995-12-30
- Surveys and statistical programs – Documentation: 75F0002M1995018
Description:
This paper presents a preview of the variables on the first microdata file of the Survey of Labour and Income Dynamics.
Release date: 1995-12-30