Results

All (817)

  • Articles and reports: 12-001-X200800210754
    Description:

    The context of the discussion is the increasing incidence of international surveys, of which one is the International Tobacco Control (ITC) Policy Evaluation Project, which began in 2002. The ITC country surveys are longitudinal, and their aim is to evaluate the effects of policy measures being introduced in various countries under the WHO Framework Convention on Tobacco Control. The challenges of organization, data collection and analysis in international surveys are reviewed and illustrated. Analysis is an increasingly important part of the motivation for large scale cross-cultural surveys. The fundamental challenge for analysis is to discern the real response (or lack of response) to policy change, separating it from the effects of data collection mode, differential non-response, external events, time-in-sample, culture, and language. Two problems relevant to statistical analysis are discussed. The first problem is the question of when and how to analyze pooled data from several countries, in order to strengthen conclusions which might be generally valid. While in some cases this seems to be straightforward, there are differing opinions on the extent to which pooling is possible and reasonable. It is suggested that for formal comparisons, random effects models are of conceptual use. The second problem is to find models of measurement across cultures and data collection modes which will enable calibration of continuous, binary and ordinal responses, and produce comparisons from which extraneous effects have been removed. It is noted that hierarchical models provide a natural way of relaxing requirements of model invariance across groups.

    Release date: 2008-12-23

  • Articles and reports: 12-001-X200800210755
    Description:

    Dependent interviewing (DI) is used in many longitudinal surveys to "feed forward" data from one wave to the next. Though it is a promising technique which has been demonstrated to enhance data quality in certain respects, relatively little is known about how it is actually administered in the field. This research seeks to address this issue through behavior coding. Various styles of DI were employed in the English Longitudinal Study of Ageing (ELSA) in January 2006, and recordings were made of pilot field interviews. These recordings were analysed to determine whether the questions (particularly the DI aspects) were administered appropriately and to explore respondents' reactions to the fed-forward data. Of particular interest was whether respondents confirmed or challenged the previously reported information, whether the prior wave data came into play when respondents were providing their current-wave answers, and how any discrepancies were negotiated by the interviewer and respondent. Also of interest was the effectiveness of the various styles of DI. For example, in some cases the prior wave data were brought forward and respondents were asked to confirm them explicitly; in other cases the previous data were read and respondents were asked whether the situation was still the same. Results indicate varying levels of compliance in terms of initial question reading, and suggest that some styles of DI may be more effective than others.

    Release date: 2008-12-23

  • Articles and reports: 12-001-X200800210756
    Description:

    In longitudinal surveys nonresponse often occurs in a pattern that is not monotone. We consider estimation of time-dependent means under the assumption that the nonresponse mechanism is last-value-dependent. Since the last value itself may be missing when nonresponse is nonmonotone, the nonresponse mechanism under consideration is nonignorable. We propose an imputation method by first deriving some regression imputation models according to the nonresponse mechanism and then applying nonparametric regression imputation. We assume that the longitudinal data follow a Markov chain with finite second-order moments. No other assumption is imposed on the joint distribution of longitudinal data and their nonresponse indicators. A bootstrap method is applied for variance estimation. Some simulation results and an example concerning the Current Employment Survey are presented.

    Release date: 2008-12-23
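
    As a rough illustration of the nonparametric regression imputation idea in the entry above, the sketch below imputes missing current-wave values by Nadaraya-Watson kernel regression on the previous wave's value, fitted on respondents. It is a simplified, hypothetical example (the previous wave is assumed fully observed, and the bandwidth is arbitrary); it does not reproduce the paper's last-value-dependent models or its bootstrap variance estimation.

    ```python
    import numpy as np

    def kernel_regression_impute(y_prev, y_curr, bandwidth=1.0):
        """Impute missing current-wave values by Nadaraya-Watson kernel regression
        of the current value on the previous wave's value, fitted on respondents."""
        y_prev = np.asarray(y_prev, dtype=float)
        y_curr = np.asarray(y_curr, dtype=float)
        observed = ~np.isnan(y_curr)
        out = y_curr.copy()
        for i in np.where(~observed)[0]:
            w = np.exp(-0.5 * ((y_prev[observed] - y_prev[i]) / bandwidth) ** 2)
            out[i] = np.sum(w * y_curr[observed]) / np.sum(w)
        return out

    # Hypothetical two-wave data; NaN marks wave-2 nonresponse
    rng = np.random.default_rng(3)
    y1 = rng.normal(50.0, 10.0, 500)
    y2 = y1 + rng.normal(0.0, 5.0, 500)
    y2[rng.random(500) < 0.2] = np.nan
    y2_imputed = kernel_regression_impute(y1, y2, bandwidth=5.0)
    ```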

  • Articles and reports: 12-001-X200800210757
    Description:

    Sample weights can be calibrated to reflect the known population totals of a set of auxiliary variables. Predictors of finite population totals calculated using these weights have low bias if these variables are related to the variable of interest, but can have high variance if too many auxiliary variables are used. This article develops an "adaptive calibration" approach, where the auxiliary variables to be used in weighting are selected using sample data. Adaptively calibrated estimators are shown to have lower mean squared error and better coverage properties than non-adaptive estimators in many cases.

    Release date: 2008-12-23
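
    The calibration step referred to in the entry above can be illustrated with a minimal linear (chi-square distance, GREG-type) calibration of design weights to known auxiliary totals. The arrays d, X and tx below are hypothetical, and the adaptive selection of auxiliary variables studied in the paper is not shown.

    ```python
    import numpy as np

    def linear_calibration_weights(d, X, tx):
        """Chi-square distance (GREG-type) calibration: adjust design weights d so
        that the weighted totals of the auxiliary variables X equal the known
        population totals tx."""
        d, X, tx = np.asarray(d, float), np.asarray(X, float), np.asarray(tx, float)
        M = (X * d[:, None]).T @ X                 # sum_i d_i x_i x_i'
        lam = np.linalg.solve(M, tx - X.T @ d)     # Lagrange multipliers
        return d * (1.0 + X @ lam)                 # calibrated weights

    # Hypothetical sample: intercept plus one auxiliary variable
    rng = np.random.default_rng(0)
    n = 200
    X = np.column_stack([np.ones(n), rng.gamma(2.0, 5.0, n)])
    d = np.full(n, 50.0)                           # equal design weights
    tx = np.array([10_000.0, 105_000.0])           # known population totals
    w = linear_calibration_weights(d, X, tx)
    print(w @ X)                                   # matches tx up to floating-point error
    ```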

  • Articles and reports: 12-001-X200800210758
    Description:

    We propose a method for estimating the variance of estimators of changes over time, a method that takes account of all the components of these estimators: the sampling design, treatment of non-response, treatment of large companies, correlation of non-response from one wave to another, the effect of using a panel, robustification, and calibration using a ratio estimator. This method, which serves to determine the confidence intervals of changes over time, is then applied to the Swiss survey of value added.

    Release date: 2008-12-23

  • Articles and reports: 12-001-X200800210759
    Description:

    The analysis of stratified multistage sample data requires the use of design information, such as stratum and primary sampling unit (PSU) identifiers or associated replicate weights, in variance estimation. In some public release data files, such design information is masked in an effort to limit disclosure risk while still allowing the user to obtain valid variance estimates. For example, in area surveys with a limited number of PSUs, the original PSUs are split and/or recombined to construct pseudo-PSUs with swapped second- or subsequent-stage sampling units. Such PSU masking methods, however, obviously distort the clustering structure of the sample design, yielding biased variance estimates, possibly with systematic patterns between the variance estimates obtained from the unmasked and masked PSU identifiers. Some previous work observed patterns in the ratio of the masked and unmasked variance estimates when plotted against the unmasked design effect. This paper investigates the effect of PSU masking on variance estimates under cluster sampling with respect to various aspects, including the clustering structure and the degree of masking. We also seek a PSU masking strategy, based on swapping subsequent-stage sampling units, that helps reduce the resulting biases of the variance estimates. For illustration, we used data from the National Health Interview Survey (NHIS) with some artificial modification. The proposed strategy performs very well in reducing the biases of variance estimates. Both theory and empirical results indicate that the effect of PSU masking on variance estimates is modest with minimal swapping of subsequent-stage sampling units. The proposed masking strategy has been applied to the 2003-2004 National Health and Nutrition Examination Survey (NHANES) data release.

    Release date: 2008-12-23
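
    The sketch below shows, in generic form, what "minimal swapping of subsequent stage sampling units" between PSUs might look like. It is not the NHIS/NHANES masking procedure described in the entry above; the pairing rule and swap rate are arbitrary illustrative choices.

    ```python
    import numpy as np

    def mask_psus_by_swapping(psu_ids, swap_rate=0.05, seed=0):
        """Return masked PSU identifiers obtained by swapping a small fraction of
        second-stage units between randomly paired PSUs."""
        rng = np.random.default_rng(seed)
        masked = np.asarray(psu_ids).copy()
        psus = np.unique(masked)
        rng.shuffle(psus)
        for a, b in zip(psus[0::2], psus[1::2]):          # pair PSUs at random
            idx_a = np.where(masked == a)[0]
            idx_b = np.where(masked == b)[0]
            k = max(1, int(swap_rate * min(len(idx_a), len(idx_b))))
            swap_a = rng.choice(idx_a, size=k, replace=False)
            swap_b = rng.choice(idx_b, size=k, replace=False)
            masked[swap_a], masked[swap_b] = b, a          # exchange k units each way
        return masked

    # Hypothetical design: 8 PSUs with 30 second-stage units each
    psu = np.repeat(np.arange(8), 30)
    psu_masked = mask_psus_by_swapping(psu, swap_rate=0.05)
    ```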

  • Articles and reports: 12-001-X200800210760
    Description:

    The design of a stratified simple random sample without replacement from a finite population deals with two main issues: the definition of a rule to partition the population into strata, and the allocation of sampling units to the selected strata. This article examines a tree-based strategy that addresses these issues jointly when the survey is multipurpose and multivariate information, quantitative or qualitative, is available. Strata are formed through a hierarchical divisive algorithm that selects finer and finer partitions by minimizing, at each step, the sample allocation required to achieve the precision levels set for each surveyed variable. In this way, large numbers of constraints can be satisfied without drastically increasing the sample size, and without discarding variables selected for stratification or reducing the number of their class intervals. Furthermore, the algorithm tends not to define empty or almost empty strata, thus avoiding the need to collapse strata. The procedure was applied to redesign the Italian Farm Structure Survey. The results indicate that the gain in efficiency achieved with our strategy is nontrivial. For a given sample size, the procedure achieves the required precision using a number of strata that is usually a very small fraction of the number of strata available when all possible classes of the covariates are cross-classified.

    Release date: 2008-12-23

  • Articles and reports: 12-001-X200800210761
    Description:

    Optimum stratification is the method of choosing the best boundaries that make strata internally homogeneous, given some sample allocation. In order to make the strata internally homogeneous, they should be constructed so that the stratum variances for the characteristic under study are as small as possible. This can be achieved effectively, if the distribution of the main study variable is known, by cutting the range of the distribution at suitable points. If the frequency distribution of the study variable is unknown, it may be approximated from past experience or from prior knowledge obtained in a recent study. In this paper the problem of finding Optimum Strata Boundaries (OSB) is considered as the problem of determining Optimum Strata Widths (OSW). The problem is formulated as a Mathematical Programming Problem (MPP) that minimizes the variance of the estimated population parameter under Neyman allocation, subject to the restriction that the sum of the widths of all the strata equals the total range of the distribution. The distributions of the study variable are considered as continuous, with Triangular and Standard Normal density functions. The formulated MPPs, which turn out to be multistage decision problems, can then be solved using the dynamic programming technique proposed by Bühler and Deutler (1975). Numerical examples are presented to illustrate the computational details. The results obtained are also compared with the method of Dalenius and Hodges (1959) using an example with a normal distribution.

    Release date: 2008-12-23
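
    For context, the Dalenius and Hodges (1959) method used as the comparison in the entry above can be sketched as the classical cum-sqrt(f) rule followed by Neyman allocation. The data and parameter values below are illustrative; the paper's dynamic-programming solution of the MPP is not reproduced.

    ```python
    import numpy as np

    def cum_root_f_boundaries(y, n_strata, n_bins=100):
        """Dalenius-Hodges cum-sqrt(f) rule: bin the study variable, accumulate
        sqrt(frequency), and cut the cumulative scale into equal parts."""
        y = np.asarray(y, dtype=float)
        freq, edges = np.histogram(y, bins=n_bins)
        csf = np.cumsum(np.sqrt(freq))
        cuts = csf[-1] * np.arange(1, n_strata) / n_strata
        idx = np.searchsorted(csf, cuts)
        return edges[idx + 1]                        # boundaries on the y scale

    def neyman_allocation(y, boundaries, n_total):
        """Neyman allocation: n_h proportional to N_h * S_h within each stratum."""
        edges = np.concatenate(([-np.inf], boundaries, [np.inf]))
        nh_sh = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            yh = y[(y > lo) & (y <= hi)]
            nh_sh.append(len(yh) * yh.std(ddof=1) if len(yh) > 1 else 0.0)
        nh_sh = np.array(nh_sh)
        return np.round(n_total * nh_sh / nh_sh.sum()).astype(int)

    # Standard normal study variable, one of the densities considered in the paper
    rng = np.random.default_rng(1)
    y = rng.normal(0.0, 1.0, 10_000)
    b = cum_root_f_boundaries(y, n_strata=4)
    print(b, neyman_allocation(y, b, n_total=500))
    ```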

  • Articles and reports: 12-001-X200800210762
    Description:

    This paper considers optimum allocation in multivariate stratified sampling as a nonlinear integer matrix optimisation problem. As a particular case, a nonlinear multi-objective integer optimisation problem is studied. A fully detailed example illustrating some of the proposed techniques is provided at the end of the paper.

    Release date: 2008-12-23

  • Articles and reports: 12-001-X200800210763
    Description:

    The present work illustrates a sampling strategy for obtaining planned sample sizes for domains belonging to different partitions of the population, while guaranteeing that the sampling errors of the domain estimates are below given thresholds. The strategy, which covers the multivariate multi-domain case, is useful when the overall sample size is bounded and, consequently, the standard solution of a stratified sample with strata given by the cross-classification of the variables defining the different partitions is not feasible because the number of strata exceeds the overall sample size. The proposed sampling strategy is based on the use of a balanced sampling selection technique and on GREG-type estimation. Its main advantage is computational feasibility, which makes it easy to implement an overall small area strategy that considers the sampling design and the estimator jointly and improves the efficiency of the direct domain estimators. An empirical simulation on real population data with different domain estimators shows the empirical properties of the examined sampling strategy.

    Release date: 2008-12-23

Analysis (394)

  • Articles and reports: 12-001-X200700210496
    Description:

    The European Community Household Panel (ECHP) is a panel survey covering a wide range of topics regarding economic, social and living conditions. In particular, it makes it possible to calculate disposable equivalized household income, which is a key variable in the study of economic inequity and poverty. To obtain reliable estimates of the average of this variable for regions within countries it is necessary to have recourse to small area estimation methods. In this paper, we focus on empirical best linear predictors of the average equivalized income based on "unit level models" borrowing strength across both areas and times. Using a simulation study based on ECHP data, we compare the suggested estimators with cross-sectional model-based and design-based estimators. In the case of these empirical predictors, we also compare three different MSE estimators. Results show that those estimators connected to models that take units' autocorrelation into account lead to a significant gain in efficiency, even when there are no covariates available whose population mean is known.

    Release date: 2008-01-03
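
    A minimal sketch of a unit-level small area predictor, in the spirit of the models discussed in the entry above but restricted to the basic cross-sectional nested error model (no time autocorrelation), is given below. As a simplification, beta is estimated by OLS and the variance components are supplied rather than estimated, so this only illustrates the shrinkage structure; the data are hypothetical.

    ```python
    import numpy as np

    def unit_level_area_predictor(y, X, area, xbar_pop, sigma2_u, sigma2_e):
        """Predict area means under the basic unit-level (nested error) model
        y_ij = x_ij' beta + u_i + e_ij.  Beta is estimated by OLS for brevity and
        the variance components are taken as given rather than estimated."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        preds = {}
        for a in np.unique(area):
            m = area == a
            gamma = sigma2_u / (sigma2_u + sigma2_e / m.sum())   # shrinkage factor
            resid = y[m].mean() - X[m].mean(axis=0) @ beta       # direct-estimate residual
            preds[a] = xbar_pop[a] @ beta + gamma * resid
        return preds

    # Hypothetical data: 5 areas, 20 sampled units each, one covariate
    rng = np.random.default_rng(2)
    area = np.repeat(np.arange(5), 20)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])
    y = X @ np.array([10.0, 2.0]) + rng.normal(0, 0.5, 5)[area] + rng.normal(0, 1.0, 100)
    xbar_pop = {a: np.array([1.0, 0.0]) for a in range(5)}       # known population means of x
    print(unit_level_area_predictor(y, X, area, xbar_pop, sigma2_u=0.25, sigma2_e=1.0))
    ```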

  • Articles and reports: 12-001-X200700210497
    Description:

    Coverage deficiencies are estimated and analysed for the 2000 population census in Switzerland. For the undercoverage component, the estimation is based on a sample independent of the census and a match with the census. For the overcoverage component, the estimation is based on a sample drawn from the census list and a match with the rest of the census. The over- and undercoverage components are then combined to obtain an estimate of the resulting net coverage. This estimate is based on a capture-recapture model, named the dual system, combined with a synthetic model. The estimators are calculated for the full population and different subgroups, with a variance estimated by a stratified jackknife. The coverage analyses are supplemented by a study of matches between the independent sample and the census in order to determine potential errors of measurement and location in the census data.

    Release date: 2008-01-03
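
    The dual-system (capture-recapture) component mentioned in the entry above can be illustrated with the classical two-list estimator. The counts below are hypothetical, and the synthetic model and stratified jackknife variance estimation used in the study are not reproduced.

    ```python
    def dual_system_estimate(census_correct, sample_total, matched):
        """Classical two-list (Lincoln-Petersen) dual-system estimate of the true
        population size, given the census count after removing overcoverage, the
        count from the independent coverage sample, and the number matched."""
        return census_correct * sample_total / matched

    # Hypothetical counts for one estimation domain
    census_correct = 95_000      # census records after removing erroneous enumerations
    sample_total   = 4_000       # persons found by the independent sample (weighted)
    matched        = 3_800       # of those, matched to a census record
    n_hat = dual_system_estimate(census_correct, sample_total, matched)
    print(n_hat, census_correct / n_hat)   # 100000.0 and an implied net coverage of 0.95
    ```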

  • Articles and reports: 12-001-X200700210498
    Description:

    In this paper we describe a methodology for combining a convenience sample with a probability sample in order to produce an estimator with a smaller mean squared error (MSE) than estimators based on only the probability sample. We then explore the properties of the resulting composite estimator, a linear combination of the convenience and probability sample estimators with weights that are a function of bias. We discuss the estimator's properties in the context of web-based convenience sampling. Our analysis demonstrates that the use of a convenience sample to supplement a probability sample for improvements in the MSE of estimation may be practical only under limited circumstances. First, the remaining bias of the estimator based on the convenience sample must be quite small, equivalent to no more than 0.1 of the outcome's population standard deviation. For a dichotomous outcome, this implies a bias of no more than five percentage points at 50 percent prevalence and no more than three percentage points at 10 percent prevalence. Second, the probability sample should contain at least 1,000-10,000 observations for adequate estimation of the bias of the convenience sample estimator. Third, it must be inexpensive and feasible to collect at least thousands (and probably tens of thousands) of web-based convenience observations. The conclusions about the limited usefulness of convenience samples with estimator bias of more than 0.1 standard deviations also apply to direct use of estimators based on that sample.

    Release date: 2008-01-03
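
    The bias thresholds quoted in the entry above follow from expressing bias in units of the outcome's standard deviation; the sketch below checks that arithmetic and shows one conventional bias-dependent weighting (the MSE-optimal weight assuming the two estimators are independent), which is an illustrative assumption and not necessarily the exact form used in the paper.

    ```python
    import numpy as np

    # Bias threshold of "0.1 of the outcome's standard deviation" for a dichotomous outcome
    for p in (0.50, 0.10):
        sd = np.sqrt(p * (1 - p))
        print(p, round(0.1 * sd, 3))       # -> 0.05 (five points) and 0.03 (three points)

    def composite_estimate(theta_p, var_p, theta_c, var_c, bias_c):
        """Linear combination of an unbiased probability-sample estimator and a
        biased convenience-sample estimator, using the MSE-optimal weight under
        the assumption that the two estimators are independent."""
        lam = var_p / (var_p + var_c + bias_c ** 2)   # weight on the convenience estimate
        return lam * theta_c + (1 - lam) * theta_p
    ```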

  • Articles and reports: 12-001-X200700210499
    Description:

    In this Issue is a column in which the Editor briefly presents each paper in the current issue of Survey Methodology. It also sometimes contains information on structural or management changes at the journal.

    Release date: 2008-01-03

Reference (54)

  • Surveys and statistical programs – Documentation: 62F0026M2009001
    Geography: Province or territory
    Description:

    This guide presents information of interest to users of data from the Survey of Household Spending, which gathers information on the spending habits, dwelling characteristics and household equipment of Canadian households. The survey covers private households in the 10 provinces. (The territories are surveyed every second year, starting in 1999.)

    This guide includes definitions of survey terms and variables, as well as descriptions of survey methodology and data quality. One section describes the various statistics that can be created using expenditure data (e.g., budget share, market share, aggregates and medians).

    Release date: 2008-12-22

  • Surveys and statistical programs – Documentation: 97-563-G2006003
    Description:

    This guide focuses on the following variables: After-tax income, Total income and its components, Income status as well as other related variables from the Income and earnings release.

    Provides information that enables users to effectively use, apply and interpret data from the 2006 Census. Each guide contains definitions and explanations on census concepts, data quality and historical comparability. Additional information will be included for specific variables to help general users better understand the concepts and questions used in the census.

    Release date: 2008-12-04

  • Surveys and statistical programs – Documentation: 97-563-G
    Description:

    This guide focuses on the following variables: After-tax income, Total income and its components, Income status as well as other related variables from the Income and earnings release.

    Provides information that enables users to effectively use, apply and interpret data from the 2006 Census. Each guide contains definitions and explanations on census concepts, data quality and historical comparability. Additional information will be included for specific variables to help general users better understand the concepts and questions used in the census.

    Release date: 2008-12-04

  • Surveys and statistical programs – Documentation: 89-634-X2008006
    Description:

    This guide is intended to help data users understand the concepts and methods used in the 2006 Aboriginal Children's Survey (ACS), which was conducted from October 2006 to March 2007.

    Technical details on sampling, processing and data quality are included in this guide. Further, the guide explains the relationship between the ACS and the 2006 Census and cautions users as to important differences in the data produced from these two sources. Appendix 1 contains a glossary of terms that relate to the ACS. Answers to some frequently asked questions are provided in Appendix 2. Links to the 2006 ACS questionnaires are found in Appendix 3.

    Release date: 2008-11-18

  • Surveys and statistical programs – Documentation: 92-445-X
    Description:

    This kit provides teachers with innovative classroom materials that make use of the results of the 2006 Census. Activities are available for intermediate and secondary schools, are classroom-ready, and have been classroom-tested by professional educators. Minimal preparation time is required.

    Activities are grouped according to subject, using census terminology.

    The 2006 Census Teacher's Kit activities are appropriate for the following subjects: English, Mathematics, Social Sciences, Geography, History, Family Studies and Informatics. Suggested grade levels are indicated on each activity and all necessary tables, charts, graphs and data are included.

    Release date: 2008-11-14

  • Notices and consultations: 92-138-X
    Description:

    With each census, Statistics Canada improves its methods of dissemination to the public by seeking ways of publishing census results in a timely and accessible manner, while maintaining high data quality standards.

    This consultation guide has been developed to assist you in providing feedback on 2006 Census products and services and in contributing ideas and suggestions to the 2011 Census dissemination strategy.

    Release date: 2008-11-05

  • Surveys and statistical programs – Documentation: 75F0002M199201A
    Description:

    Starting in 1994, the Survey of Labour and Income Dynamics (SLID) will follow individuals and families for at least six years, tracking their labour market experiences, changes in income and family circumstances. An initial proposal for the content of SLID, entitled Content of the Survey of Labour and Income Dynamics : Discussion Paper, was distributed in February 1992.

    That paper served as a background document for consultation with interested users. The content underwent significant change during this process. Based upon the revised content, a large-scale test of SLID will be conducted in February and May 1993.

    This document outlines the current demographic and labour content, leading into the test.

    Release date: 2008-10-21

  • Surveys and statistical programs – Documentation: 12-589-X
    Description:

    This free publication presents the concepts and criteria utilized to determine the entities that comprise the public sector of Canada.

    The resulting statistical universe provides the framework to observe the extent of governments' involvement in the production of goods and services and the associated resource allocation process in the Canadian economy.

    The concepts and criteria contained in the guide are consistent with two internationally accepted classification standards: the System of National Accounts (SNA 2008) guide; and the International Monetary Fund (IMF) Government Finance Statistics Manual 2001.

    As well, the guide delineates the various public sector components that are used in compiling and aggregating public sector data. This structure also enables comparisons of Canadian government finance data with international macroeconomic statistical systems.

    Release date: 2008-09-26

  • Surveys and statistical programs – Documentation: 82-582-X
    Description:

    This special methodological paper will help readers understand and assess reports that rank the health status or health system performance of a country, province or jurisdiction. The report outlines the components and processes that underlie health rankings, explores why such rankings can be difficult to interpret and includes a plain-language checklist to use as a critical evaluative resource when reading health-ranking reports.

    Release date: 2008-09-16

  • Surveys and statistical programs – Documentation: 75-512-X
    Description:

    This book provides technical documentation of the variables, methodologies and extended lists of references used in developing the research findings reported in "New Frontiers of Research on Retirement". It will be used around the world by researchers and teachers, as well as by students preparing theses related to patterns of transition to retirement. This documentation is important because a large part of the book is devoted to scientific papers that are based on Statistics Canada's data and that required substantial innovation in concepts and data.

    Release date: 2008-09-08