Results

All (1,891) (1,870 to 1,880 of 1,891 results)

  • Articles and reports: 12-001-X197900100002
    Description: This paper includes a description of interviewer techniques and procedures used to minimize non-response, an outline of methods used to monitor and control non-response, and a discussion of how non-respondents are treated in the data processing and estimation stages of the Canadian Labour Force Survey. Recent non-response rates as well as data on the characteristics of non-respondents are also given. It is concluded that a yearly non-response rate of approximately 5 percent is probably the best that can be achieved in the Labour Force Survey.
    Release date: 1979-06-15

  • Articles and reports: 12-001-X197900100003
    Description: Two methods for estimating the correlated response variance of a survey estimator are studied by way of both theoretical comparison and empirical investigation. The variance of these estimators is discussed and the effects of outliers examined. Finally, an improved estimator is developed and evaluated.
    Release date: 1979-06-15

  • Articles and reports: 12-001-X197900100004
    Description: Let U = {1, 2, …, i, …, N} be a finite population of N identifiable units. A known “size measure” x_i is associated with unit i; i = 1, 2, ..., N. A sampling procedure for selecting a sample of size n (2 < n < N) with probability proportional to size (PPS) and without replacement (WOR) from the population is proposed. With this method, the inclusion probability is proportional to size (IPPS) for each unit in the population.
    Release date: 1979-06-15
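The abstract above describes drawing n units with probability proportional to size (PPS), without replacement, so that each unit's inclusion probability is proportional to its size measure (IPPS). The paper's own procedure is not reproduced here, but the classic systematic PPS scheme has exactly that property whenever no unit's selection probability exceeds 1. A minimal Python sketch, offered only as an illustration of the IPPS idea:

```python
import random
from bisect import bisect_left
from itertools import accumulate

def pps_systematic(sizes, n, seed=None):
    """Systematic PPS sampling without replacement (illustrative sketch).

    Gives inclusion probability pi_i = n * x_i / sum(x) for every unit,
    provided no pi_i exceeds 1 (no "certainty" units).
    """
    rng = random.Random(seed)
    total = sum(sizes)
    step = total / n                      # sampling interval
    start = rng.uniform(0, step)          # random start in [0, step)
    cum = list(accumulate(sizes))         # cumulative size totals
    # a point p selects unit i when cum[i-1] < p <= cum[i]
    return [bisect_left(cum, start + step * k) for k in range(n)]
```

Because the n selection points are spaced one interval apart and each unit's size is at most one interval, the same unit cannot be hit twice, which is what makes the draw without replacement.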

  • Articles and reports: 12-001-X197900100005
    Description: Approximate cutoff rules for stratifying a population into a take-all and take-some universe have been given by Dalenius (1950) and Glasser (1962). They expressed the cutoff value (that value which delineates the boundary of the take-all and take-some) as a function of the mean, the sampling weight and the population variance. Their cutoff values were derived on the assumption that a single random sample of size n was to be drawn without replacement from the population of size N.

    In the present context, exact and approximate cutoff rules have been worked out for a similar situation. Rather than specifying the sample size, the precision (coefficient of variation) is given. Note that in many sampling situations, the sampler is given a set of objectives in terms of reliability rather than sample size. The result is particularly useful for determining the take-all/take-some boundary for samples drawn from a known population. The procedure is also extended to ratio estimation.
    Release date: 1979-06-15
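The take-all/take-some idea above can be made concrete with a brute-force sketch. This is not the exact rule of Dalenius, Glasser, or the paper; it simply scans every candidate boundary, enumerates the units above it completely, assumes simple random sampling without replacement below it, and picks the boundary minimizing the total sample needed to meet a target coefficient of variation on the estimated total:

```python
import math

def takeall_cutoff(values, cv_target):
    """Pick a take-all / take-some boundary for a target CV of the
    estimated population total, assuming SRS-WOR in the take-some
    stratum. Returns (total sample size, cutoff value, n_takeall,
    n_takesome). Illustrative sketch only.
    """
    y = sorted(values, reverse=True)
    total = sum(y)
    target_var = (cv_target * total) ** 2
    best = None
    for t in range(len(y)):              # top t units form the take-all stratum
        rest = y[t:]
        nr = len(rest)
        if nr < 2:
            n_ts = nr                    # too few left to estimate a variance
        else:
            mean = sum(rest) / nr
            s2 = sum((v - mean) ** 2 for v in rest) / (nr - 1)
            # SRS-WOR variance of the expanded take-some total:
            #   nr^2 * (1/n - 1/nr) * s2  <=  target_var
            n_ts = max(1, math.ceil(nr * nr * s2 / (target_var + nr * s2)))
        size = t + n_ts
        if best is None or size < best[0]:
            cutoff = y[t - 1] if t else float("inf")
            best = (size, cutoff, t, n_ts)
    return best
```

On a skewed population the two large units below are cut off into the take-all stratum, and the remaining units need only a modest SRS to reach a 5% CV.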

  • Articles and reports: 12-001-X197900100006
    Description: Under a sequential sampling plan, the proportion defective in the sample is generally a biased estimator of the population value. In this paper, an unbiased estimator is given. Also, an unbiased estimator of its variance is derived. These results are applied to an estimation problem from the 1976 Canadian Census.
    Release date: 1979-06-15
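The bias mentioned in the abstract is easy to see in the classic inverse (sequential) binomial case: sampling continues until c defectives are found, the sample proportion c/n overestimates the true rate, while Haldane's estimator (c-1)/(n-1) is exactly unbiased. The simulation below is illustrative only; it is not the paper's estimator or its 1976 Census application:

```python
import random

def inverse_sample(p, c, rng):
    """Draw items until c defectives are observed; return total draws n."""
    n = defect = 0
    while defect < c:
        n += 1
        if rng.random() < p:
            defect += 1
    return n

def estimate(p=0.2, c=5, reps=20000, seed=1):
    """Compare the naive proportion c/n with Haldane's unbiased (c-1)/(n-1)."""
    rng = random.Random(seed)
    naive = unbiased = 0.0
    for _ in range(reps):
        n = inverse_sample(p, c, rng)
        naive += c / n                  # biased upward under sequential stopping
        unbiased += (c - 1) / (n - 1)   # unbiased for p
    return naive / reps, unbiased / reps
```

With p = 0.2 and c = 5, the naive mean lands noticeably above 0.2 while the unbiased estimator averages to 0.2.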

  • Articles and reports: 12-001-X197800254832
    Description: I.P. Fellegi and D. Holt proposed a systematic approach to automatic edit and imputation. An implementation of this proposal was a Generalized Edit and Imputation System by the Hot-Deck Approach, that was utilized in the edit and imputation of the 1976 Canadian Census of Population and Housing. This paper discusses that application, evaluating the strengths and weaknesses of the methodology with some empirical evidence. The system will be considered in relation to the general issues of the edit and imputation of survey data. Some directions for future developments will also be considered.
    Release date: 1978-12-15

  • Articles and reports: 12-001-X197800254833
    Description: Owners of small businesses complain about the quantity of forms they are required to complete for collectors of statistics. Administrative data are an alternative source but do not usually include all the information required by the survey takers.

    The “Tax Data Imputation System” makes use of tax data collected from a large number of businesses by Revenue Canada and data obtained by sample survey for a small subset of these businesses. Survey data is imputed (estimated) for all the businesses not actually surveyed using a “hot-deck” technique, with adjustments made to ensure certain edit rules are satisfied. The results of a simulation study suggest that this procedure has reasonable statistical properties. Estimators (of means or totals) are unbiased with variances of comparable size to the corresponding ratio estimators.
    Release date: 1978-12-15
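The hot-deck idea in the abstract can be sketched generically: each unsurveyed business borrows the survey value of the "nearest" surveyed business on the tax variable, ratio-adjusted to its own size. This is a minimal sketch with invented field names (`tax`, `survey_value`), not the Tax Data Imputation System itself, and it omits the edit-rule adjustments the paper describes:

```python
def hot_deck_impute(recipients, donors, key="tax"):
    """Nearest-neighbour hot-deck imputation with a ratio adjustment.

    Each recipient copies the survey value of the donor closest to it
    on the auxiliary variable `key`, scaled by the ratio of their
    auxiliary values so the imputed value matches the recipient's size.
    """
    out = []
    for rec in recipients:
        donor = min(donors, key=lambda d: abs(d[key] - rec[key]))
        imputed = dict(rec)
        imputed["survey_value"] = donor["survey_value"] * rec[key] / donor[key]
        out.append(imputed)
    return out
```

For example, a recipient with tax value 9 whose nearest donor has tax 10 and survey value 12 receives the scaled value 12 × 9/10 = 10.8.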

  • Articles and reports: 12-001-X197800254834
    Description: Frames designed for continuous surveys are sometimes used for ad hoc surveys which require selection of sampling units separate from those selected for the continuous survey. This paper presents an unbiased extension of Keyfitz’s (1951) sample updating method to the case where a portion of the frame has been reserved for surveys other than the main continuous survey. A simple although biased alternative is presented.

    The scope under Platek and Singh’s (1975) design strategy for an area based continuous survey requiring updating is then expanded to encompass rotation of first stage units, establishment of a separate special survey sub-frame, and procedures to prevent re-selection of ultimate sampling units.

    The methods are evaluated in a Monte Carlo study using Census data to simulate the design for the Canadian Labour Force Survey.
    Release date: 1978-12-15

  • Articles and reports: 12-001-X197800254835
    Description: Some estimators alternative to the usual PPS estimator are suggested in this paper for situations where the size measure used for PPS sampling is not correlated with the study variable and where data are available on another supplementary variable (size measure). Properties of these estimators are studied under super-population models and also empirically.
    Release date: 1978-12-15

  • Articles and reports: 12-001-X197800254830
    Description: The problems of dealing with non-response at various stages of survey planning are discussed, with implications for the mean square error, practicality, and possible advantages and disadvantages. Conceptual issues of editing and imputation are also considered with regard to complexity and levels of imputation. The methods of imputation include weighting, duplication, and substitution of historical records. The paper also includes some methodology on bias and variance.
    Release date: 1978-12-15
Stats in brief (83) (50 to 60 of 83 results)

Articles and reports (1,783) (10 to 20 of 1,783 results)

  • Articles and reports: 12-001-X202400100010
    Description: This discussion summarizes the interesting new findings on measurement error in opt-in surveys by Kennedy, Mercer and Lau (KML). While KML enlighten readers about “bogus responding” and possible patterns within it, this discussion suggests combining these new findings with other avenues of research in nonprobability sampling, such as improving representativeness.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100011
    Description: Kennedy, Mercer, and Lau explore misreporting by respondents in non-probability samples and discover a new feature, namely that of deliberate misreporting of demographic characteristics. This finding suggests that the “arms race” between researchers and those determined to disrupt the practice of social science is not over and researchers need to account for such respondents if using high-quality probability surveys to help reduce error in non-probability samples.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100012
    Description: Nonprobability samples are quick and low-cost and have become popular for some types of survey research. Kennedy, Mercer and Lau examine data quality issues associated with opt-in nonprobability samples frequently used in the United States. They show that the estimates from these samples have serious problems that go beyond representativeness. A total survey error perspective is important for evaluating all types of surveys.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100013
    Description: Statistical approaches developed for nonprobability samples generally focus on nonrandom selection as the primary reason survey respondents might differ systematically from the target population. Well-established theory states that in these instances, by conditioning on the necessary auxiliary variables, selection can be rendered ignorable and survey estimates will be free of bias. But this logic rests on the assumption that measurement error is nonexistent or small. In this study we test this assumption in two ways. First, we use a large benchmarking study to identify subgroups for which errors in commercial, online nonprobability samples are especially large in ways that are unlikely due to selection effects. Then we present a follow-up study examining one cause of the large errors: bogus responding (i.e., survey answers that are fraudulent, mischievous or otherwise insincere). We find that bogus responding, particularly among respondents identifying as young or Hispanic, is a significant and widespread problem in commercial, online nonprobability samples, at least in the United States. This research highlights the need for statisticians working with commercial nonprobability samples to address bogus responding and issues of representativeness – not just the latter.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100014
    Description: This paper is an introduction to the special issue on the use of nonprobability samples featuring three papers that were presented at the 29th Morris Hansen Lecture by Courtney Kennedy, Yan Li and Jean-François Beaumont.
    Release date: 2024-06-25

  • Articles and reports: 75F0002M2024005
    Description: The Canadian Income Survey (CIS) has introduced improvements to the methods and data sources used to produce income and poverty estimates with the release of its 2022 reference year estimates. Foremost among these improvements is a significant increase in the sample size for a large subset of the CIS content. The weighting methodology was also improved and the target population of the CIS was changed from persons aged 16 years and over to persons aged 15 years and over. This paper describes the changes made and presents the approximate net result of these changes on the income estimates and data quality of the CIS using 2021 data. The changes described in this paper highlight the ways in which data quality has been improved while having little impact on key CIS estimates and trends.
    Release date: 2024-04-26

  • Articles and reports: 18-001-X2024001
    Description: This study applies small area estimation (SAE) and a new geographic concept called Self-contained Labor Area (SLA) to the Canadian Survey on Business Conditions (CSBC) with a focus on remote work opportunities in rural labor markets. Through SAE modelling, we estimate the proportions of businesses, classified by general industrial sector (service providers and goods producers), that would primarily offer remote work opportunities to their workforce.
    Release date: 2024-04-22

  • Articles and reports: 11-522-X202200100001
    Description: Record linkage aims at identifying record pairs related to the same unit and observed in two different data sets, say A and B. Fellegi and Sunter (1969) suggest that each record pair be tested as to whether it was generated from the set of matched or unmatched pairs. The decision function consists of the ratio between m(y) and u(y), the probabilities of observing a comparison y of a set of k > 3 key identifying variables in a record pair under the assumption that the pair is a match or a non-match, respectively. These parameters are usually estimated by means of the EM algorithm, using as data the comparisons on all the pairs of the Cartesian product Ω = A×B. These observations (on the comparisons and on the pairs’ status as match or non-match) are assumed to be generated independently of other pairs, an assumption characterizing most of the literature on record linkage and implemented in software tools (e.g., RELAIS, Cibella et al. 2012). On the contrary, comparisons y and matching status in Ω are deterministically dependent. As a result, EM-based estimates of m(y) and u(y) are usually poor. This fact jeopardizes the effective application of the Fellegi-Sunter method, as well as the automatic computation of quality measures and the possibility of applying efficient methods for model estimation on linked data (e.g., regression functions), as in Chambers et al. (2015). We propose to explore Ω by a set of samples, each one drawn so as to preserve the independence of comparisons among the selected record pairs. Simulation results are encouraging.
    Release date: 2024-03-25
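Under the usual conditional-independence assumption on the key variables, the m(y)/u(y) decision function described above reduces to a sum of per-variable log likelihood-ratio weights. A minimal sketch of that weight computation (the m and u probabilities below are invented for illustration, not estimates from any real data):

```python
import math

def fs_weight(agree, m, u):
    """Log2 likelihood-ratio match weight for one comparison vector.

    agree[k] is True if key variable k agrees between the two records;
    m[k] = P(agreement | pair is a match);
    u[k] = P(agreement | pair is a non-match).
    Positive weights favour "match", negative favour "non-match".
    """
    w = 0.0
    for a, mk, uk in zip(agree, m, u):
        w += math.log2(mk / uk) if a else math.log2((1 - mk) / (1 - uk))
    return w
```

In the Fellegi-Sunter framework, pairs with weights above an upper threshold are declared links, those below a lower threshold non-links, and the band in between is sent to clerical review.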

  • Articles and reports: 11-522-X202200100002
    Description: The authors used the Splink probabilistic linkage package, developed by the UK Ministry of Justice, to link census data from England and Wales to itself to find duplicate census responses. A large gold standard of confirmed census duplicates was available, meaning that the results of the Splink implementation could be quality-assured. This paper describes the implementation and features of Splink, gives details of the settings and parameters that we used to tune Splink for our particular project, and gives the results that we obtained.
    Release date: 2024-03-25

  • Articles and reports: 11-522-X202200100003
    Description: Estimation at fine levels of aggregation is necessary to better describe society. Small area estimation model-based approaches that combine sparse survey data with rich data from auxiliary sources have been proven useful to improve the reliability of estimates for small domains. Considered here is a scenario where small area model-based estimates, produced at a given aggregation level, needed to be disaggregated to better describe the social structure at finer levels. For this scenario, an allocation method was developed to implement the disaggregation, overcoming challenges associated with data availability and model development at such fine levels. The method is applied to adult literacy and numeracy estimation at the county-by-group-level, using data from the U.S. Program for the International Assessment of Adult Competencies. In this application the groups are defined in terms of age or education, but the method could be applied to estimation of other equity-deserving groups.
    Release date: 2024-03-25
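The allocation step described above (disaggregating an area-level model-based estimate to finer groups) can be sketched in its simplest benchmarked-proportional form. The group labels and shares below are invented for illustration; the paper's actual allocation method is more elaborate:

```python
def disaggregate(area_total, group_shares):
    """Allocate an area-level estimate to subgroups in proportion to
    auxiliary shares, so the group estimates sum back to the area-level
    estimate (benchmarking). Generic sketch of the allocation idea.
    """
    s = sum(group_shares.values())
    return {g: area_total * v / s for g, v in group_shares.items()}
```

The benchmarking property, that the disaggregated estimates add up exactly to the published area-level figure, holds by construction regardless of the shares used.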
Journals and periodicals (25) (0 to 10 of 25 results)

  • Journals and periodicals: 11-522-X
    Description: Since 1984, an annual international symposium on methodological issues has been sponsored by Statistics Canada. Proceedings have been available since 1987.
    Release date: 2024-06-28

  • Journals and periodicals: 12-001-X
    Geography: Canada
    Description: The journal publishes articles dealing with various aspects of statistical development relevant to a statistical agency, such as design issues in the context of practical constraints, use of different data sources and collection techniques, total survey error, survey evaluation, research in survey methodology, time series analysis, seasonal adjustment, demographic studies, data integration, estimation and data analysis methods, and general survey systems development. The emphasis is placed on the development and evaluation of specific methodologies as applied to data collection or the data themselves.
    Release date: 2024-06-25

  • Journals and periodicals: 75F0002M
    Description: This series provides detailed documentation on income developments, including survey design issues, data quality evaluation and exploratory research.
    Release date: 2024-04-26

  • Journals and periodicals: 11-633-X
    Description: Papers in this series provide background discussions of the methods used to develop data for economic, health, and social analytical studies at Statistics Canada. They are intended to provide readers with information on the statistical methods, standards and definitions used to develop databases for research purposes. All papers in this series have undergone peer and institutional review to ensure that they conform to Statistics Canada's mandate and adhere to generally accepted standards of good professional practice.
    Release date: 2024-01-22

  • Journals and periodicals: 12-206-X
    Description: This report summarizes the annual achievements of the Methodology Research and Development Program (MRDP) sponsored by the Modern Statistical Methods and Data Science Branch at Statistics Canada. This program covers research and development activities in statistical methods with potentially broad application in the agency’s statistical programs; these activities would otherwise be less likely to be carried out during the provision of regular methodology services to those programs. The MRDP also includes activities that provide support in the application of past successful developments in order to promote the use of the results of research and development work. Selected prospective research activities are also presented.
    Release date: 2023-10-11

  • Journals and periodicals: 92F0138M
    Description: The Geography working paper series is intended to stimulate discussion on a variety of topics covering conceptual, methodological or technical work to support the development and dissemination of the division's data, products and services. Readers of the series are encouraged to contact the Geography Division with comments and suggestions.
    Release date: 2019-11-13

  • Journals and periodicals: 89-20-0001
    Description: Historical works allow readers to peer into the past, not only to satisfy our curiosity about “the way things were,” but also to see how far we’ve come, and to learn from the past. For Statistics Canada, such works are also opportunities to commemorate the agency’s contributions to Canada and its people, and serve as a reminder that an institution such as this continues to evolve each and every day.

    On the occasion of Statistics Canada’s 100th anniversary in 2018, Standing on the shoulders of giants: History of Statistics Canada: 1970 to 2008 builds on the work of two significant publications on the history of the agency, picking up the story in 1970 and carrying it through the next 36 years, until 2008. One day, when enough time has passed to allow for sufficient objectivity, it will again be time to document the agency’s next chapter as it continues to tell Canada’s story in numbers.
    Release date: 2018-12-03

  • Journals and periodicals: 12-605-X
    Description: The Record Linkage Project Process Model (RLPPM) was developed by Statistics Canada to identify the processes and activities involved in record linkage. The RLPPM applies to linkage projects conducted at the individual and enterprise level using diverse data sources to create new data sources to meet analytical and operational needs.
    Release date: 2017-06-05

  • Journals and periodicals: 91-621-X
    Description: This document briefly describes Demosim, the microsimulation population projection model: how it works, its methods, and its data sources. It is a methodological complement to the analytical products produced using Demosim.
    Release date: 2017-01-25

  • Journals and periodicals: 11-634-X
    Description: This publication is a catalogue of strategies and mechanisms that a statistical organization should consider adopting, according to its particular context. This compendium is based on lessons learned and best practices in the leadership and management of statistical agencies within the scope of Statistics Canada’s International Statistical Fellowship Program (ISFP). It contains four broad sections: characteristics of an effective national statistical system; core management practices; improving, modernizing and finding efficiencies; and strategies to better inform and engage key stakeholders.
    Release date: 2016-07-06