Results

All (1,891) (1,770 to 1,780 of 1,891 results)

  • Articles and reports: 12-001-X198600214449
    Description:

    Nearly all surveys and censuses are subject to two types of nonresponse: unit (total) and item (partial). Several methods of compensating for nonresponse have been developed in an attempt to reduce the bias associated with nonresponse. This paper summarizes the nonresponse adjustment procedures used at the U.S. Census Bureau, focusing on unit nonresponse. Some discussion of current and future research in this area is also included.

    Release date: 1986-12-15
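
    A common unit-nonresponse compensation of the kind surveyed above is a weighting-class adjustment. The sketch below is illustrative only, not the Census Bureau's actual procedure: within each adjustment cell, respondents' base weights are inflated by the inverse of the cell's weighted response rate, so respondents also represent the nonrespondents in their cell.

    ```python
    # Illustrative weighting-class nonresponse adjustment (a sketch, not the
    # Census Bureau's procedure): each respondent's base weight is inflated
    # by the inverse of its cell's weighted response rate.

    def adjust_weights(units):
        """units: list of dicts with 'weight', 'cell' and 'responded' keys."""
        totals = {}       # weighted count of all sampled units, per cell
        resp_totals = {}  # weighted count of respondents, per cell
        for u in units:
            totals[u["cell"]] = totals.get(u["cell"], 0.0) + u["weight"]
            if u["responded"]:
                resp_totals[u["cell"]] = resp_totals.get(u["cell"], 0.0) + u["weight"]
        adjusted = []
        for u in units:
            if u["responded"]:
                factor = totals[u["cell"]] / resp_totals[u["cell"]]
                adjusted.append({**u, "weight": u["weight"] * factor})
        return adjusted

    units = [
        {"weight": 10.0, "cell": "A", "responded": True},
        {"weight": 10.0, "cell": "A", "responded": False},
        {"weight": 20.0, "cell": "B", "responded": True},
    ]
    out = adjust_weights(units)
    # Cell A's respondent now carries weight 20; cell B is unchanged, and the
    # adjusted weights still sum to the original total of 40.
    ```

    Note that the adjustment preserves the weighted total within each cell, which is what keeps the procedure approximately unbiased when response is random within cells.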

  • Articles and reports: 12-001-X198600214450
    Description:

    From an annual sample of U.S. corporate tax returns, the U.S. Internal Revenue Service provides estimates of population and subpopulation totals for several hundred financial items. The basic sample design is highly stratified and fairly complex. Starting with the 1981 and 1982 samples, the design was altered to include a double sampling procedure. This was motivated by the need for better allocation of resources, in an environment of shrinking budgets. Items not observed in the subsample are predicted, using a modified hot deck imputation procedure. The present paper describes the design, estimation, and evaluation of the effects of the new procedure.

    Release date: 1986-12-15
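
    The IRS procedure above is a *modified* hot deck; the minimal sketch below only illustrates the basic idea behind any hot deck: within an imputation class, a missing item takes the most recently observed (donor) value.

    ```python
    # A minimal sequential hot deck sketch (illustrative only; the procedure
    # described in the abstract is a modified variant): within each class,
    # a missing value is replaced by the last observed donor value.

    def hot_deck(records, classes):
        """records: list of (class_label, value-or-None) pairs, in file order."""
        decks = {c: None for c in classes}  # last donor value seen, per class
        out = []
        for label, value in records:
            if value is None:
                value = decks[label]        # impute from the current deck
            else:
                decks[label] = value        # observed value updates the deck
            out.append((label, value))
        return out

    data = [("small", 5.0), ("small", None), ("large", 12.0), ("large", None)]
    result = hot_deck(data, {"small", "large"})
    # [('small', 5.0), ('small', 5.0), ('large', 12.0), ('large', 12.0)]
    ```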

  • Articles and reports: 12-001-X198600214451
    Description:

    The Canadian Census of Construction (COC) uses a complex plan for sampling small businesses (those having a gross income of less than $750,000). Stratified samples are drawn from overlapping frames. Two subsamples are selected independently from one of the samples, and more detailed information is collected on the businesses in the subsamples. There are two possible methods of estimating totals for the variables collected in the subsamples. The first approach is to determine weights based on sampling rates. A number of different weights must be used. The second approach is to impute values to the businesses included in the sample but not in the subsamples. This approach creates a complete “rectangular” sample file, and a single weight may then be used to produce estimates for the population. This “large-scale imputation” technique is presently applied for the Census of Construction. The purpose of the study is to compare the figures obtained using various estimation techniques with the estimates produced by means of large-scale imputation.

    Release date: 1986-12-15
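
    The two estimation routes described above can be made concrete in a toy two-phase setting (a sketch with invented numbers, not COC figures): route 1 weights the subsample by the inverse subsampling rate; route 2 imputes a value to every sampled business outside the subsample and applies the single first-phase weight to the completed "rectangular" file. With simple mean imputation the two totals coincide.

    ```python
    # Toy comparison of subsample weighting vs. large-scale imputation
    # (illustrative numbers only).

    first_phase_weight = 5.0
    n_sample = 5                   # businesses in the first-phase sample
    subsample = [10.0, 14.0]       # detailed item observed for 2 of the 5

    # Route 1: subsample-specific weight = first-phase weight / subsampling rate
    sub_weight = first_phase_weight * n_sample / len(subsample)
    total_weighted = sub_weight * sum(subsample)

    # Route 2: impute the subsample mean to the 3 businesses without the
    # detail, then apply the single first-phase weight to the complete file
    mean_sub = sum(subsample) / len(subsample)
    completed = subsample + [mean_sub] * (n_sample - len(subsample))
    total_imputed = first_phase_weight * sum(completed)

    print(total_weighted, total_imputed)
    # 300.0 300.0
    ```

    The study's question is precisely how the two routes diverge once the imputation is more elaborate than a cell mean.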

  • Articles and reports: 12-001-X198600214462
    Description:

    In the presence of unit nonresponse, two types of variables can sometimes be observed for units in the “intended” sample s, namely, (a) variables used to estimate the response mechanism (the response probabilities), (b) variables (here called co-variates) that explain the variable of interest, in the usual regression theory sense. This paper, based on Särndal and Swensson (1985 a, b), discusses nonresponse adjusted estimators with and without explicit involvement of co-variates. We conclude that the presence of strong co-variates in an estimator induces several favourable properties. Among other things, estimators making use of co-variates are considerably more resistant to nonresponse bias. We discuss the calculation of standard error and valid confidence intervals for estimators involving co-variates. The structure of the standard error is examined and discussed.

    Release date: 1986-12-15

  • Articles and reports: 12-001-X198600214463
    Description:

    The procedure of subsampling the nonrespondents suggested by Hansen and Hurwitz (1946) is considered. Post-stratification prior to the subsampling is examined. For the mean of a characteristic of interest, ratio estimators suitable for different practical situations are proposed and their merits are examined. Suitable ratio estimators are also suggested for situations in which "hard-core" nonrespondents are present.

    Release date: 1986-12-15
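
    The Hansen–Hurwitz (1946) procedure the abstract builds on combines the respondent mean with the mean of an intensively followed-up subsample of the nonrespondents. A minimal sketch of the basic estimator (not the ratio estimators proposed in the paper):

    ```python
    # Hansen-Hurwitz estimator of the population mean under subsampling of
    # nonrespondents: n1 units respond, n2 do not, and a follow-up subsample
    # of the n2 nonrespondents stands in for all of them.

    def hansen_hurwitz_mean(respondent_values, n2, followup_values):
        n1 = len(respondent_values)
        n = n1 + n2
        ybar1 = sum(respondent_values) / n1                   # respondent mean
        ybar2 = sum(followup_values) / len(followup_values)   # follow-up mean
        return (n1 * ybar1 + n2 * ybar2) / n

    # 8 respondents, 4 initial nonrespondents of whom 2 were followed up
    est = hansen_hurwitz_mean([10, 12, 11, 9, 10, 12, 8, 8], 4, [14, 16])
    print(round(est, 2))
    # 11.67
    ```

    Because the follow-up units typically differ from the easy respondents (here 15 vs. 10 on average), dropping them would bias the estimate downward.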

  • Articles and reports: 12-001-X198600114404
    Description:

    Missing survey data occur because of total nonresponse and item nonresponse. The standard way to attempt to compensate for total nonresponse is by some form of weighting adjustment, whereas item nonresponses are handled by some form of imputation. This paper reviews methods of weighting adjustment and imputation and discusses their properties.

    Release date: 1986-06-16

  • Articles and reports: 12-001-X198600114437
    Description:

    In this paper, different types of response/nonresponse and associated measures, such as rates, are presented and discussed together with their implications for both estimation and administrative procedures. Missing data problems have led to inconsistent nonresponse terminology, with completion rates, eligibility rates, contact rates and refusal rates each definable in several ways. In addition, there are item nonresponse rates as well as characteristic response rates. Depending on the use, the rates may be weighted or unweighted.

    Release date: 1986-06-16

  • Articles and reports: 12-001-X198600114438
    Description:

    Using the optimal estimating functions for survey sampling estimation (Godambe and Thompson 1986), we obtain some optimality results for nonresponse situations in survey sampling.

    Release date: 1986-06-16

  • Articles and reports: 12-001-X198600114439
    Description:

    Multiple imputation is a technique for handling survey nonresponse that replaces each missing value created by nonresponse by a vector of possible values that reflect uncertainty about which values to impute. A simple example and brief overview of the underlying theory are used to introduce the general procedure.

    Release date: 1986-06-16
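
    The pooling step of multiple imputation is Rubin's combining rules: analyze each completed dataset, then combine the m point estimates and their variances into one estimate whose variance reflects imputation uncertainty. A minimal sketch of that combining step (not of the imputation itself):

    ```python
    # Rubin's rules for combining m completed-data analyses: the pooled
    # variance adds the between-imputation spread, inflated by (1 + 1/m),
    # to the average within-imputation variance.

    def rubin_combine(estimates, variances):
        m = len(estimates)
        qbar = sum(estimates) / m                    # pooled point estimate
        ubar = sum(variances) / m                    # within-imputation variance
        b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between-imputation
        total_var = ubar + (1 + 1 / m) * b
        return qbar, total_var

    # Three completed-data estimates of a mean, with their variances
    qbar, tv = rubin_combine([10.0, 10.4, 9.8], [0.5, 0.6, 0.55])
    ```

    The (1 + 1/m)·B term is what single imputation omits, which is why single-imputation standard errors understate the true uncertainty.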

  • Articles and reports: 12-001-X198600114440
    Description:

    Statistics Canada has undertaken a project to develop a generalized edit and imputation system, the intent of which is to meet the processing requirements of most of its surveys. The various approaches to imputation for item non-response that have been proposed will be discussed. Important issues related to implementing these proposals in a generalized setting will also be addressed.

    Release date: 1986-06-16
Stats in brief (83) (30 to 40 of 83 results)

  • Stats in brief: 89-20-00062021004
    Description:

    One important distinction made in this video is between Data Science, Artificial Intelligence and Machine Learning. You'll learn what machine learning can be used for, how it works, and some of the different methods for doing it. You'll also learn how to build and use machine learning processes responsibly.

    This video is recommended for those who already have some familiarity with the concepts and techniques associated with computer programming and using algorithms to analyze data.

    Release date: 2021-05-03

  • Stats in brief: 89-20-00062021005
    Description:

    By the end of this video, you should have a deeper understanding of the fundamentals of using data to tell a story. We will go over some of the principal components of storytelling, including the data, the narrative and the visualization, and discuss how they can be used to construct concise, informative and interesting messages your audience can trust. Then you will learn the importance of a well-planned data story, which includes learning who your audience will be, what they should know and how best to deliver that information.

    Release date: 2021-05-03

  • Stats in brief: 89-20-00062021006
    Description:

    In this video, you'll learn what we can do to data itself, to make it easier to work with. That's the role of data standards. And you'll learn what extra information we can provide to make data easier to use. That's the role of metadata.

    Release date: 2021-05-03

  • Stats in brief: 11-001-X202104628783
    Description: Release published in The Daily – Statistics Canada’s official release bulletin
    Release date: 2021-02-15

  • Stats in brief: 11-627-M2020072
    Description:

    This infographic provides an overview of the Canadian Research and Development Classification (CRDC), a national standard jointly developed by the Canada Foundation for Innovation (CFI), the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and Statistics Canada.

    Release date: 2020-10-05

  • Stats in brief: 89-20-00062020001
    Description:

    In this video, you will be introduced to the fundamentals of data quality, which can be summed up in six dimensions—or six different ways to think about quality. You will also learn how each dimension can be used to evaluate the quality of data.

    Release date: 2020-09-23

  • Stats in brief: 89-20-00062020004
    Description:

    In this module, we will explore the concepts of data and statistical information, and the differences between them. You will also learn about the different types of data.

    Release date: 2020-09-23

  • Stats in brief: 89-20-00062020005
    Description:

    Data gathering involves first determining what data you need, then where to find it, how to get it and how to keep it safe. This module introduces you to things you should consider when gathering data.

    Release date: 2020-09-23

  • Stats in brief: 89-20-00062020006
    Description:

    The data terminology and concepts covered in this video are datasets, databases, data protection, data variables, micro and macro data, and statistical information.

    Release date: 2020-09-23

  • Stats in brief: 89-20-00062020007
    Description:

    In this video you will learn about the steps and activities in the data journey, as well as the foundation supporting it.

    Release date: 2020-09-23
Articles and reports (1,783) (50 to 60 of 1,783 results)

  • Articles and reports: 12-001-X202300200011
    Description: The article considers sampling designs for populations that can be represented as a N × M matrix. For instance, when investigating tourist activities, the rows could be locations visited by tourists and the columns days in the tourist season. The goal is to sample cells (i, j) of the matrix when the number of selections within each row and each column is fixed a priori. The ith row sample size represents the number of selected cells within row i; the jth column sample size is the number of selected cells within column j. A matrix sampling design gives an N × M matrix of sample indicators, with entry 1 at position (i, j) if cell (i, j) is sampled and 0 otherwise. The first matrix sampling design investigated has one level of sampling; row and column sample sizes are set in advance, where the row sample sizes can vary while the column sample sizes are all equal. The fixed margins can be seen as balancing constraints, and algorithms available for selecting such samples are reviewed. A new estimator for the variance of the Horvitz-Thompson estimator for the mean of survey variable y is then presented. Several levels of sampling might be necessary to account for all the constraints; this involves multi-level matrix sampling designs that are also investigated.
    Release date: 2024-01-03
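
    For reference, the Horvitz-Thompson estimator of the mean that the article's variance estimator targets has, in its generic form, each sampled value weighted by the inverse of its inclusion probability (a sketch of the generic estimator, not the matrix-specific one in the paper):

    ```python
    # Generic Horvitz-Thompson estimator of a population mean: each sampled
    # value y_i is expanded by 1/pi_i, and the expanded total is divided by
    # the population size N.

    def ht_mean(sampled, N):
        """sampled: list of (y_value, inclusion_probability) pairs."""
        return sum(y / pi for y, pi in sampled) / N

    # 3 sampled cells out of N = 10, each with inclusion probability 0.3
    est = ht_mean([(4.0, 0.3), (6.0, 0.3), (5.0, 0.3)], N=10)
    print(est)
    # 5.0
    ```

    The difficulty the article addresses is that fixed row and column margins induce dependence between cell indicators, which is what makes the variance of this estimator nonstandard.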

  • Articles and reports: 12-001-X202300200012
    Description: In recent decades, many different uses of auxiliary information have enriched survey sampling theory and practice. Jean-Claude Deville contributed significantly to this progress. My comments trace some of the steps on the way to one important theory for the use of auxiliary information: Estimation by calibration.
    Release date: 2024-01-03
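
    Calibration estimation, as developed by Deville and Särndal, adjusts the design weights as little as possible subject to reproducing known auxiliary totals. A minimal sketch with one auxiliary variable and the chi-square distance (the linear, GREG-type case; real calibration software handles many auxiliaries and other distance functions):

    ```python
    # Linear calibration sketch, one auxiliary variable, chi-square distance:
    # design weights d_i become w_i = d_i * (1 + lambda * x_i), with lambda
    # chosen so the calibrated weights reproduce the known total X.

    def calibrate(d, x, X_total):
        """d: design weights; x: auxiliary values; X_total: known population total."""
        tx_hat = sum(di * xi for di, xi in zip(d, x))         # HT estimate of X
        denom = sum(di * xi * xi for di, xi in zip(d, x))
        lam = (X_total - tx_hat) / denom                      # calibration equation
        return [di * (1 + lam * xi) for di, xi in zip(d, x)]  # calibrated weights

    d = [10.0, 10.0, 10.0]
    x = [1.0, 2.0, 3.0]
    w = calibrate(d, x, X_total=66.0)
    check = round(sum(wi * xi for wi, xi in zip(w, x)), 6)    # reproduces 66.0
    ```

    The gain is that any study variable correlated with x inherits the accuracy of the known auxiliary total.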

  • Articles and reports: 12-001-X202300200013
    Description: Jean-Claude Deville is one of the most prominent researchers in survey sampling theory and practice. His research on balanced sampling, indirect sampling and calibration in particular is internationally recognized and widely used in official statistics. He was also a pioneer in the field of functional data analysis. This discussion gives us the opportunity to recognize the immense work he has accomplished, and to pay tribute to him. In the first part of this article, we briefly recall his contribution to functional principal component analysis. We also detail some recent extensions of his work at the intersection of the fields of functional data analysis and survey sampling. In the second part of this paper, we present some extensions of Jean-Claude's work in indirect sampling. These extensions are motivated by concrete applications and illustrate Jean-Claude's influence on our work as researchers.
    Release date: 2024-01-03

  • Articles and reports: 12-001-X202300200014
    Description: Many things have been written about Jean-Claude Deville in tributes from the statistical community (see Tillé, 2022a; Tillé, 2022b; Christine, 2022; Ardilly, 2022; and Matei, 2022) and from the École nationale de la statistique et de l’administration économique (ENSAE) and the Société française de statistique. Pascal Ardilly, David Haziza, Pierre Lavallée and Yves Tillé provide an in-depth look at Jean-Claude Deville’s contributions to survey theory. To pay tribute to him, I would like to discuss Jean-Claude Deville’s contribution to the more day-to-day application of methodology for all the statisticians at the Institut national de la statistique et des études économiques (INSEE) and at the public statistics service. To do this, I will use my work experience, and particularly the four years (1992 to 1996) I spent working with him in the Statistical Methods Unit and the discussions we had thereafter, especially in the 2000s on the rolling census.
    Release date: 2024-01-03

  • Articles and reports: 12-001-X202300200015
    Description: This article discusses and provides comments on the Ardilly, Haziza, Lavallée and Tillé’s summary presentation of Jean-Claude Deville’s work on survey theory. It sheds light on the context, applications and uses of his findings, and shows how these have become engrained in the role of statisticians, in which Jean-Claude was a trailblazer. It also discusses other aspects of his career and his creative inventions.
    Release date: 2024-01-03

  • Articles and reports: 12-001-X202300200016
    Description: In this discussion, I will present some additional aspects of three major areas of survey theory developed or studied by Jean-Claude Deville: calibration, balanced sampling and the generalized weight-share method.
    Release date: 2024-01-03

  • Articles and reports: 12-001-X202300200017
    Description: Jean-Claude Deville, who passed away in October 2021, was one of the most influential researchers in the field of survey statistics over the past 40 years. This article traces some of his contributions that have had a profound impact on both survey theory and practice. This article will cover the topics of balanced sampling using the cube method, calibration, the weight-sharing method, the development of variance expressions of complex estimators using influence function and quota sampling.
    Release date: 2024-01-03

  • Articles and reports: 12-001-X202300200018
    Description: Sample surveys, as a tool for policy development and evaluation and for scientific, social and economic research, have been employed for over a century. In that time, they have primarily served as tools for collecting data for enumerative purposes. Estimation of these characteristics has been typically based on weighting and repeated sampling, or design-based, inference. However, sample data have also been used for modelling the unobservable processes that gave rise to the finite population data. This type of use has been termed analytic, and often involves integrating the sample data with data from secondary sources.

    Alternative approaches to inference in these situations, drawing inspiration from mainstream statistical modelling, have been strongly promoted. The principal focus of these alternatives has been on allowing for informative sampling. Modern survey sampling, though, is more focussed on situations where the sample data are in fact part of a more complex set of data sources all carrying relevant information about the process of interest. When an efficient modelling method such as maximum likelihood is preferred, the issue becomes one of how it should be modified to account for both complex sampling designs and multiple data sources. Here application of the Missing Information Principle provides a clear way forward.

    In this paper I review how this principle has been applied to resolve so-called “messy” data analysis issues in sampling. I also discuss a scenario that is a consequence of the rapid growth in auxiliary data sources for survey data analysis. This is where sampled records from one accessible source or register are linked to records from another less accessible source, with values of the response variable of interest drawn from this second source, and where a key output is small area estimates for the response variable for domains defined on the first source.
    Release date: 2024-01-03

  • Articles and reports: 82-003-X202301200002
    Description: The validity of survival estimates from cancer registry data depends, in part, on the identification of the deaths of deceased cancer patients. People whose deaths are missed seemingly live on forever and are informally referred to as “immortals”, and their presence in registry data can result in inflated survival estimates. This study assesses the issue of immortals in the Canadian Cancer Registry (CCR) using a recently proposed method that compares the survival of long-term survivors of cancers for which “statistical” cure has been reported with that of similar people from the general population.
    Release date: 2023-12-20

  • Articles and reports: 11-633-X2023003
    Description: This paper spans the academic work and estimation strategies used in national statistics offices. It addresses the issue of producing fine, grid-level geography estimates for Canada by exploring the measurement of subprovincial and subterritorial gross domestic product using Yukon as a test case.
    Release date: 2023-12-15
Journals and periodicals (25) (20 to 30 of 25 results)

  • Journals and periodicals: 85F0036X
    Geography: Canada
    Description:

    This study documents the methodological and technical challenges involved in performing analysis on small groups using a sample survey: oversampling, response rates, non-response due to language, release feasibility and sampling variability. It is based on the 1999 General Social Survey (GSS) on victimization.

    Release date: 2002-05-14

  • Low Income Cut-offs (archived)
    Journals and periodicals: 13-551-X
    Description:

    Low income cut-offs (LICOs) are intended to convey the income level at which a family may be in straitened circumstances because it has to spend a greater portion of its income on the basics (food, clothing and shelter) than does the average family of similar size. The LICOs vary by family size and by size of community.

    This publication provides a brief explanation of how the LICOs are derived and updated annually. In addition, it provides on a historical basis, LICOs for different family sizes by size of area of residence. LICOs are calculated based on the spending patterns of families on basic 'necessities' - food, shelter and clothing - as collected from the Survey of Household Spending (formerly referred to as the Family Expenditure Survey (FAMEX)).

    Release date: 1999-12-10

  • Journals and periodicals: 84F0013X
    Geography: Canada, Province or territory
    Description:

    This study was initiated to test the validity of probabilistic linkage methods used at Statistics Canada. It compared the results of data linkages on infant deaths in Canada with infant death data from Nova Scotia and Alberta. It also compared the availability of fetal deaths on the national and provincial files.

    Release date: 1999-10-08

  • Table: 11-516-X
    Description:

    The second edition of Historical statistics of Canada was jointly produced by the Social Science Federation of Canada and Statistics Canada in 1983. This volume contains about 1,088 statistical tables on the social, economic and institutional conditions of Canada from the start of Confederation in 1867 to the mid-1970s. The tables are arranged in sections with an introduction explaining the content of each section, the principal sources of data for each table, and general explanatory notes regarding the statistics. In most cases, there is sufficient description of the individual series to enable the reader to use them without consulting the numerous basic sources referenced in the publication.

    The electronic version of this historical publication is accessible on the Internet site of Statistics Canada as a free downloadable document: text as HTML pages and all tables as individual spreadsheets in a comma delimited format (CSV) (which allows online viewing or downloading).

    Release date: 1999-07-29

  • Journals and periodicals: 88-522-X
    Description:

    The framework described here is intended as a basic operational instrument for systematic development of statistical information respecting the evolution of science and technology and its interactions with the society, the economy and the political system of which it is a part.

    Release date: 1999-02-24