Response and nonresponse


Results

All (141): results 20 to 30 of 141

  • Articles and reports: 12-001-X201800254952
    Description:

    Panel surveys are frequently used to measure the evolution of parameters over time. Panel samples may suffer from different types of unit non-response, which is commonly handled by estimating the response probabilities and reweighting the respondents. In this work, we consider estimation and variance estimation under unit non-response for panel surveys. Extending the work of Kim and Kim (2007) to the case of several time points, we consider a propensity-score-adjusted estimator that accounts for initial non-response and attrition, and propose a suitable variance estimator. It is then extended to cover most estimators encountered in surveys, including calibrated estimators, complex parameters and longitudinal estimators. The properties of the proposed variance estimator and of a simplified variance estimator are evaluated through a simulation study. An illustration of the proposed methods on data from the ELFE survey is also presented.

    Release date: 2018-12-20
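
    As a rough, hypothetical illustration of the reweighting idea summarized in this abstract (not the estimator or the variance estimator proposed in the paper), the Python sketch below adjusts design weights for both initial non-response and attrition using propensities estimated within cells of a single invented auxiliary variable; all data and variable names are made up.

        import numpy as np

        # Hypothetical two-wave panel: d = design weights, x = auxiliary variable
        # known for every sampled unit, r0 = responded at wave 1 (initial response),
        # r1 = still responding at wave 2 (no attrition).
        rng = np.random.default_rng(0)
        n = 1000
        d = np.full(n, 50.0)
        x = rng.integers(0, 2, n)
        r0 = rng.random(n) < np.where(x == 1, 0.9, 0.7)
        r1 = r0 & (rng.random(n) < np.where(x == 1, 0.95, 0.85))
        y = 10.0 + 5.0 * x + rng.normal(0.0, 1.0, n)   # wave-2 study variable

        # Estimate the two response propensities within cells defined by x
        # (a simple stand-in for the propensity models discussed in the paper).
        p0 = np.empty(n)
        p1 = np.empty(n)
        for v in (0, 1):
            cell = x == v
            p0[cell] = r0[cell].mean()                 # P(initial response | cell)
            p1[cell] = r1[cell & r0].mean()            # P(no attrition | response, cell)

        # Propensity-score-adjusted estimate of the wave-2 total: design weight
        # divided by the product of the two estimated propensities.
        w = d / (p0 * p1)
        t_psa = np.sum(w[r1] * y[r1])
        t_full = np.sum(d * y)                         # benchmark from the full sample
        print(f"PSA estimate: {t_psa:,.0f}   full-sample benchmark: {t_full:,.0f}")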

  • Articles and reports: 12-001-X201800254957
    Description:

    When a linear imputation method is used to correct non-response based on certain assumptions, total variance can be assigned to non-responding units. Linear imputation is not as limited as it seems, given that the most common methods – ratio, donor, mean and auxiliary value imputation – are all linear imputation methods. We will discuss the inference framework and the unit-level decomposition of variance due to non-response. Simulation results will also be presented. This decomposition can be used to prioritize non-response follow-up or manual corrections, or simply to guide data analysis.

    Release date: 2018-12-20

  • Articles and reports: 12-001-X201800154929
    Description:

    The U.S. Census Bureau is investigating nonrespondent subsampling strategies for use in the 2017 Economic Census. Design constraints include a mandated lower bound on the unit response rate, along with targeted industry-specific response rates. This paper presents research on allocation procedures for subsampling nonrespondents, conditional on the subsampling being systematic. We consider two approaches: (1) equal-probability sampling and (2) optimized allocation with constraints on unit response rates and sample size, with the objective of selecting larger samples in industries that have lower initial response rates. We present a simulation study that examines the relative bias and mean squared error for the proposed allocations, assessing each procedure’s sensitivity to the size of the subsample, the response propensities, and the estimation procedure.

    Release date: 2018-06-21

  • Articles and reports: 12-001-X201700114820
    Description:

    Measurement errors can induce bias in the estimation of transitions, leading to erroneous conclusions about labour market dynamics. Traditional literature on gross flows estimation is based on the assumption that measurement errors are uncorrelated over time. This assumption is not realistic in many contexts, because of survey design and data collection strategies. In this work, we use a model-based approach to correct observed gross flows from classification errors with latent class Markov models. We refer to data collected with the Italian Continuous Labour Force Survey, which is cross-sectional, quarterly, with a 2-2-2 rotating design. The questionnaire allows us to use multiple indicators of labour force conditions for each quarter: two collected in the first interview, and a third collected one year later. Our approach provides a method to estimate labour market mobility, taking into account correlated errors and the rotating design of the survey. The best-fitting model is a mixed latent class Markov model with covariates affecting latent transitions and correlated errors among indicators; the mixture components are of mover-stayer type. The better fit of the mixture specification is due to more accurately estimated latent transitions.

    Release date: 2017-06-22

  • Articles and reports: 12-001-X201600214661
    Description:

    An example presented by Jean-Claude Deville in 2005 is subjected to three estimation methods: the method of moments, the maximum likelihood method, and generalized calibration. The three methods yield exactly the same results for the two non-response models. A discussion follows on how to choose the most appropriate model.

    Release date: 2016-12-20

  • Articles and reports: 12-001-X201600214677
    Description:

    How do we tell whether weighting adjustments reduce nonresponse bias? If a variable is measured for everyone in the selected sample, then the design weights can be used to calculate an approximately unbiased estimate of the population mean or total for that variable. A second estimate of the population mean or total can be calculated using the survey respondents only, with weights that have been adjusted for nonresponse. If the two estimates disagree, then there is evidence that the weight adjustments may not have removed the nonresponse bias for that variable. In this paper we develop the theoretical properties of linearization and jackknife variance estimators for evaluating the bias of an estimated population mean or total by comparing estimates calculated from overlapping subsets of the same data with different sets of weights, when poststratification or inverse propensity weighting is used for the nonresponse adjustments to the weights. We provide sufficient conditions on the population, sample, and response mechanism for the variance estimators to be consistent, and demonstrate their small-sample properties through a simulation study.

    Release date: 2016-12-20
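
    The comparison described in this abstract can be illustrated with a short, hypothetical Python sketch: a design-weighted full-sample mean is set against a nonresponse-adjusted respondent mean, with propensities estimated in invented cells purely for illustration (the paper's linearization and jackknife variance estimators are not shown).

        import numpy as np

        # Hypothetical selected sample: x is measured for every sampled unit,
        # d are design weights, r marks the respondents.
        rng = np.random.default_rng(1)
        n = 2000
        d = rng.uniform(20.0, 80.0, n)
        x = rng.normal(100.0, 15.0, n)
        r = rng.random(n) < 1.0 / (1.0 + np.exp(-(x - 100.0) / 15.0))

        # Estimate 1: design-weighted mean of x from the full selected sample.
        mean_full = np.sum(d * x) / np.sum(d)

        # Estimate 2: respondents only, with weights adjusted by inverse response
        # propensities estimated here within five cells formed from x itself.
        cells = np.digitize(x, np.quantile(x, [0.2, 0.4, 0.6, 0.8]))
        p_hat = np.empty(n)
        for c in np.unique(cells):
            m = cells == c
            p_hat[m] = np.sum(d[m & r]) / np.sum(d[m])   # weighted response rate
        w_adj = d / p_hat
        mean_resp = np.sum(w_adj[r] * x[r]) / np.sum(w_adj[r])

        # A sizeable gap between the two estimates is evidence that the adjustment
        # has not removed the nonresponse bias for this variable.
        print(f"full sample: {mean_full:.2f}   adjusted respondents: {mean_resp:.2f}")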

  • Articles and reports: 12-001-X201500114172
    Description:

    When a random sample drawn from a complete list frame suffers from unit nonresponse, calibration weighting to population totals can be used to remove nonresponse bias under either an assumed response (selection) or an assumed prediction (outcome) model. Calibration weighting in this way can not only provide double protection against nonresponse bias, it can also decrease variance. By employing a simple trick one can simultaneously estimate the variance under the assumed prediction model and the mean squared error under the combination of an assumed response model and the probability-sampling mechanism. Unfortunately, there is a practical limitation on what response model can be assumed when design weights are calibrated to population totals in a single step. In particular, the choice for the response function cannot always be logistic. That limitation does not hinder calibration weighting when performed in two steps: from the respondent sample to the full sample to remove the response bias, and then from the full sample to the population to decrease variance. There are potential efficiency advantages from using the two-step approach as well, even when the calibration variables employed in each step are a subset of the calibration variables in the single step. Simultaneous mean-squared-error estimation using linearization is possible, but more complicated than when calibrating in a single step.

    Release date: 2015-06-29
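
    A hypothetical numpy sketch of two-step linear calibration in the spirit of this abstract: respondent weights are first calibrated to full-sample estimated totals (the nonresponse-adjustment step), then to population totals (the variance-reduction step). The helper function, the data and the totals are invented, and the paper's response-model and mean-squared-error machinery is not shown.

        import numpy as np

        def linear_calibration(d, x, totals):
            # Return weights w = d * (1 + x @ lam) that reproduce the given totals.
            lam = np.linalg.solve(x.T @ (d[:, None] * x), totals - x.T @ d)
            return d * (1.0 + x @ lam)

        # Hypothetical sample: x holds the calibration variables (with an intercept),
        # d are design weights, r flags respondents, pop_totals plays the role of
        # known population totals from the list frame.
        rng = np.random.default_rng(2)
        n = 2000
        x = np.column_stack([np.ones(n), rng.integers(0, 2, n), rng.normal(50.0, 10.0, n)])
        d = rng.uniform(15.0, 45.0, n)
        r = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.2 + 0.6 * x[:, 1])))
        pop_totals = x.T @ d * 1.02                 # invented "known" population totals

        # Step 1: calibrate respondent weights to the full-sample estimated totals
        # (removes the response bias under the assumed model).
        w1 = linear_calibration(d[r], x[r], x.T @ d)

        # Step 2: calibrate the resulting weights to the population totals
        # (decreases variance).
        w2 = linear_calibration(w1, x[r], pop_totals)

        y = 3.0 + 1.5 * x[:, 1] + 0.1 * x[:, 2] + rng.normal(0.0, 1.0, n)
        print(np.sum(w2 * y[r]), np.sum(d * y))     # two-step estimate vs benchmark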

  • Articles and reports: 12-001-X201500114173
    Description:

    Nonresponse is present in almost all surveys and can severely bias estimates. A distinction is usually made between unit and item nonresponse. Noting that for a particular survey variable we simply have observed and unobserved values, in this work we exploit the connection between unit and item nonresponse. In particular, we assume that the factors that drive unit response are the same as those that drive item response on selected variables of interest. Response probabilities are then estimated using a latent covariate that measures the will to respond to the survey and that can explain part of the otherwise unknown decision of a unit to participate in the survey. This latent covariate is estimated using latent trait models. This approach is particularly relevant for sensitive items and, therefore, can handle non-ignorable nonresponse. Auxiliary information known for both respondents and nonrespondents can be included either in the latent variable model or in the response probability estimation process. The approach can also be used when auxiliary information is not available, and we focus here on this case. We propose an estimator using a reweighting system based on the previous latent covariate when no other observed auxiliary information is available. Results on its performance, from simulation studies on both real and simulated data, are encouraging.

    Release date: 2015-06-29

  • Articles and reports: 11-522-X201300014262
    Description:

    Measurement error is one source of bias in statistical analysis. However, its possible implications are mostly ignored. One class of models that can be especially affected by measurement error is fixed-effects models. By validating the survey responses on welfare receipt from five panel survey waves against register data, the size and form of longitudinal measurement error can be determined. It is shown that the measurement error for welfare receipt is serially correlated and non-differential. However, when estimating the coefficients of longitudinal fixed-effects models of welfare receipt on subjective health for men and women, the coefficients are biased only for the male subpopulation.

    Release date: 2014-10-31

  • Articles and reports: 11-522-X201300014263
    Description:

    Collecting information from sampled units over the Internet or by mail is much more cost-efficient than conducting interviews. These methods make self-enumeration an attractive data-collection method for surveys and censuses. Despite the benefits associated with self-enumeration data collection, in particular Internet-based data collection, self-enumeration can produce low response rates compared with interviews. To increase response rates, nonrespondents are subject to a mixed mode of follow-up treatments, which influence the resulting probability of response, to encourage them to participate. Factors and interactions are commonly used in regression analyses, and have important implications for the interpretation of statistical models. Because response occurrence is intrinsically conditional, we first record response occurrence in discrete intervals, and we characterize the probability of response by a discrete time hazard. This approach facilitates examining when a response is most likely to occur and how the probability of responding varies over time. The nonresponse bias can be avoided by multiplying the sampling weight of respondents by the inverse of an estimate of the response probability. Estimators for model parameters as well as for finite population parameters are given. Simulation results on the performance of the proposed estimators are also presented.

    Release date: 2014-10-31
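
    A hypothetical numerical sketch of the reweighting step described in this abstract: response occurrence is recorded in discrete collection intervals, a discrete-time hazard is estimated within invented covariate groups, and the sampling weights of respondents are multiplied by the inverse of the resulting estimated response probability. This is not the model or the estimators proposed in the paper; all data are simulated.

        import numpy as np

        # Hypothetical follow-up data: each sampled unit is observed over three
        # discrete collection intervals (initial contact plus two follow-ups).
        rng = np.random.default_rng(3)
        n, T = 1500, 3
        d = np.full(n, 40.0)                          # sampling weights
        group = rng.integers(0, 2, n)                 # covariate shifting the hazard
        hazard_true = np.where(group == 1, 0.5, 0.3)  # per-interval response hazard

        responded = np.zeros(n, dtype=bool)
        interval_of_response = np.full(n, -1)
        for t in range(T):
            newly = (~responded) & (rng.random(n) < hazard_true)
            interval_of_response[newly] = t
            responded |= newly

        # Estimate a discrete-time hazard per group and interval: among units still
        # nonrespondent at the start of interval t, the share that responds in t.
        # The estimated overall response probability is 1 - prod_t (1 - hazard_t).
        p_hat = np.ones(n)
        for g in (0, 1):
            m = group == g
            surv = 1.0
            for t in range(T):
                at_risk = m & ((interval_of_response == -1) | (interval_of_response >= t))
                h_t = np.mean(interval_of_response[at_risk] == t)
                surv *= (1.0 - h_t)
            p_hat[m] = 1.0 - surv

        # Nonresponse-adjusted weights: multiply the sampling weight of each
        # respondent by the inverse of its estimated response probability.
        w = d / p_hat
        y = 20.0 + 3.0 * group + rng.normal(0.0, 1.0, n)   # a study variable
        t_adj = np.sum(w[responded] * y[responded])
        t_full = np.sum(d * y)
        print(f"adjusted respondent total: {t_adj:,.0f}   full-sample total: {t_full:,.0f}")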
Data (0): No content available at this time.

Analysis (140): results 100 to 110 of 140

  • Articles and reports: 11-522-X20010016275
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Hot deck imputation, in which missing items are replaced with values from respondents, is often used in survey sampling. A model supporting such procedures is one in which response probabilities are assumed equal within imputation cells. In this paper, an efficient version of hot deck imputation is described, its variance is derived under the cell response model, and an approximation to the fully efficient procedure, in which a small number of values are imputed for each non-respondent, is presented. Variance estimation procedures are presented and illustrated in a Monte Carlo study.

    Release date: 2002-09-12
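
    A minimal sketch of random hot deck imputation within cells, the class of procedures this abstract refers to; the data, cell definitions and response rates are invented, and the efficient and fractional variants discussed in the paper are not shown.

        import numpy as np

        # Hypothetical data: item y is missing for nonrespondents; cells are the
        # imputation cells within which response probabilities are assumed equal.
        rng = np.random.default_rng(4)
        n = 500
        cells = rng.integers(0, 5, n)
        y = 50.0 + 10.0 * cells + rng.normal(0.0, 5.0, n)
        respond = rng.random(n) < 0.7
        y_obs = np.where(respond, y, np.nan)

        # Random hot deck within cells: each missing item is replaced by the value
        # of a randomly chosen respondent (donor) from the same imputation cell.
        y_imp = y_obs.copy()
        for c in np.unique(cells):
            donors = y_obs[(cells == c) & respond]
            recipients = (cells == c) & ~respond
            y_imp[recipients] = rng.choice(donors, size=recipients.sum(), replace=True)

        print(f"respondent mean: {np.nanmean(y_obs):.1f}   "
              f"imputed-data mean: {y_imp.mean():.1f}   true mean: {y.mean():.1f}")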

  • Articles and reports: 11-522-X20010016280
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Survey response rates serve as one key measure of the quality of a data set. However, they are only useful to a statistical agency in the evaluation of ongoing data collections if they are based on a predefined set of formulas and definitions that are uniformly applied across all data collections.

    In anticipation of a revision of the current National Center for Education Statistics (NCES) statistical standards, several agency-wide audits of statistical practices were undertaken in the late 1990s. In particular, a compendium documenting major survey design parameters of NCES surveys was drafted. Related to this, NCES conducted a targeted audit of the consistency in response rate calculations across these surveys.

    Although NCES has had written statistical standards since 1988, the audit of the reported response rates from 50 survey components in 14 NCES surveys revealed considerable variability in procedures used to calculate response rates. During the course of the response rate audit, the Statistical Standards Program staff concluded that the organization of the 1992 Standards made it difficult to find all of the information associated with response rates in the standards. In fact, there are references to response rate in a number of separate standards scattered throughout the 1992 Statistical Standards.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016297
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists. The Danish National Institute of Social Research is an independent institution under the Ministry of Social Affairs. The Institute carries out surveys on social issues encompassing a broad range of subjects. SFI-SURVEY is an economically independent section within the Institute that carries out scientific surveys for the Institute, for other public organizations, and for the private sector. The SFI-SURVEY interviewer body has 450 interviewers spread throughout Denmark. There are five supervisors, each with a regional office, who are in contact with the interviewer body. On a yearly basis, SFI-SURVEY conducts 40 surveys. The average sample size (gross) is 1,000 persons. The average response rate is 75%. Since January 1999, the following information about the surveys has been recorded:
      · Type of method used (face-to-face or telephone)
      · Length of questionnaire (interviewing time in minutes)
      · Whether or not a folder was sent to the respondents in advance
      · Whether or not an interviewer instruction meeting was given
      · Number of interviews per interviewer per week
      · Whether or not the subject of the survey was of interest to the respondents
      · Interviewing month
      · Target group (random selection of the total population or special groups)

    Release date: 2002-09-12

  • Articles and reports: 12-001-X20000025532
    Description:

    When a survey response mechanism depends on a variable of interest measured within the same survey and observed for only part of the sample, the situation is one of nonignorable nonresponse. In such a situation, ignoring the nonresponse can generate significant bias in the estimation of a mean or of a total. To solve this problem, one option is the joint modeling of the response mechanism and the variable of interest, followed by estimation using the maximum likelihood method. The main criticism levelled at this method is that estimation using the maximum likelihood method is based on the hypothesis of error normality for the model involving the variable of interest, and this hypothesis is difficult to verify. In this paper, the author proposes an estimation method that is robust to the hypothesis of normality, so constructed that there is no need to specify the distribution of errors. The method is evaluated using Monte Carlo simulations. The author also proposes a simple method of verifying the validity of the hypothesis of error normality whenever nonresponse is not ignorable.

    Release date: 2001-02-28

  • Articles and reports: 82-003-X20000015300
    Geography: Canada
    Description:

    This article examines the extent of proxy reporting in the National Population Health Survey (NPHS). It also explores associations between proxy reporting status and the prevalence of selected health problems, and investigates the relationship between changes in proxy reporting status and two-year incidence of health problems.

    Release date: 2000-10-20

  • Articles and reports: 12-001-X20000015183
    Description:

    For surveys which involve more than one stage of data collection, one method recommended for adjusting weights for nonresponse (after the first stage of data collection) entails utilizing auxiliary variables (from previous stages of data collection) which are identified as predictors of nonresponse.

    Release date: 2000-08-30

  • Articles and reports: 12-001-X19980024349
    Description:

    Measurement of gross flows in labour force status is an important objective of the continuing labour force surveys carried out by many national statistics agencies. However, it is well known that estimation of these flows can be complicated by nonresponse, measurement errors, sample rotation and complex design effects. Motivated by nonresponse patterns in household-based surveys, this paper focuses on estimation of labour force gross flows, while simultaneously adjusting for nonignorable nonresponse. Previous model-based approaches to gross flows estimation have assumed nonresponse to be an individual-level process. We propose a class of models that allow for nonignorable household-level nonresponse. A simulation study is used to show that individual-level labour force gross flows estimates from household-based survey data may be biased, and that estimates using household-level models can offer a reduction in this bias.

    Release date: 1999-01-14

  • Articles and reports: 12-001-X19980024352
    Description:

    The National Population Health Survey (NPHS) is one of Statistics Canada's three major longitudinal household surveys providing extensive coverage of the Canadian population. A panel of approximately 17,000 people is being followed up every two years for up to twenty years. The survey data are used for longitudinal analyses, although an important objective is the production of cross-sectional estimates. Each cycle, panel respondents provide detailed health information (H) while, to augment the cross-sectional sample, general socio-demographic and health information (G) is collected from all members of their households. This particular collection strategy presents several observable response patterns for panel members after two cycles: GH-GH, GH-G*, GH-**, G*-GH, G*-G* and G*-**, where "*" denotes a missing portion of data. The article presents the methodology developed to deal with these types of longitudinal nonresponse as well as with nonresponse from a cross-sectional perspective. The use of weight adjustments for nonresponse and the creation of adjustment cells for weighting using a CHAID algorithm are discussed.

    Release date: 1999-01-14

  • Articles and reports: 12-001-X19970013103
    Description:

    This paper discusses the use of some simple diagnostics to guide the formation of nonresponse adjustment cells. Following Little (1986), we consider construction of adjustment cells by grouping sample units according to their estimated response probabilities or estimated survey items. Four issues receive principal attention: assessment of the sensitivity of adjusted mean estimates to changes in k, the number of cells used; identification of specific cells that require additional refinement; comparison of adjusted and unadjusted mean estimates; and comparison of estimation results from estimated-probability and estimated-item based cells. The proposed methods are motivated and illustrated with an application involving estimation of mean consumer unit income from the U.S. Consumer Expenditure Survey.

    Release date: 1997-08-18
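
    A hypothetical Python sketch of one of the constructions this abstract discusses: response probabilities are estimated from auxiliary variables, units are grouped into k adjustment cells on the estimated probabilities (following the idea attributed to Little, 1986), and respondent weights are inflated within cells. The data, the model and the choice k = 5 are invented; the paper's diagnostics are only hinted at in a comment.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical sample: auxiliary variables x known for all sampled units,
        # design weights d, response indicator r.
        rng = np.random.default_rng(5)
        n = 3000
        x = rng.normal(size=(n, 2))
        d = rng.uniform(10.0, 30.0, n)
        r = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * x[:, 0] - 0.4 * x[:, 1])))

        # Step 1: estimate response probabilities from the auxiliary variables.
        p_hat = LogisticRegression().fit(x, r).predict_proba(x)[:, 1]

        # Step 2: form k adjustment cells by grouping units on the estimated
        # probabilities (here k = 5 quintile cells).
        k = 5
        edges = np.quantile(p_hat, np.linspace(0, 1, k + 1)[1:-1])
        cell = np.digitize(p_hat, edges)

        # Step 3: within each cell, inflate respondent weights so that they
        # reproduce the full-sample weight total of the cell.
        w_adj = d.copy()
        for c in range(k):
            m = cell == c
            factor = d[m].sum() / d[m & r].sum()
            w_adj[m & r] = d[m & r] * factor
        w_adj[~r] = 0.0

        # Sensitivity to k can be checked by repeating the above with, say,
        # k = 10 and comparing the resulting adjusted means.
        y = 5.0 + 2.0 * x[:, 0] + rng.normal(0.0, 1.0, n)
        print(np.sum(w_adj * y) / np.sum(w_adj))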

  • Articles and reports: 12-001-X199600114386
    Description:

    In some surveys, many auxiliary variables are available for respondents and nonrespondents for use in nonresponse adjustment. One decision that arises is how to select which of the auxiliary variables should be used for this purpose and another decision involves how the selected variables should be used. Several approaches to forming weighting adjustments for nonresponse are considered in this research. The methods include those based on logistic regression models, categorical search algorithms, and generalized raking. These methods are applied to adjust for panel nonresponse in the Survey of Income and Program Participation (SIPP). The estimates from the alternative adjustments are assessed by comparing them to one another and to benchmark estimates from other sources.

    Release date: 1996-06-14
Reference (1): 1 result

  • Surveys and statistical programs – Documentation: 75-005-M2023001
    Description: This document provides information on the evolution of response rates for the Labour Force Survey (LFS), and discusses the evaluation of two aspects of data quality that ensure the LFS estimates continue to provide an accurate portrait of the Canadian labour market.
    Release date: 2023-10-30