Response and nonresponse
Results
All (141) (results 41 to 50 of 141)
- 41. Linearization variance estimation for generalized raking estimators in the presence of nonresponse (Archived)
Articles and reports: 12-001-X201000211380
Description:
Alternative forms of linearization variance estimators for generalized raking estimators are defined via different choices of the weights applied (a) to residuals and (b) to the estimated regression coefficients used in calculating the residuals. Some theory is presented for three forms of generalized raking estimator, the classical raking ratio estimator, the 'maximum likelihood' raking estimator and the generalized regression estimator, and for associated linearization variance estimators. A simulation study is undertaken, based upon a labour force survey and an income and expenditure survey. Properties of the estimators are assessed with respect to both sampling and nonresponse. The study displays little difference between the properties of the alternative raking estimators for a given sampling scheme and nonresponse model. Amongst the variance estimators, the approach which weights residuals by the design weight can be severely biased in the presence of nonresponse. The approach which weights residuals by the calibrated weight tends to display much less bias. Varying the choice of the weights used to construct the regression coefficients has little impact.
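For readers who want the notation behind this comparison, here is a minimal sketch of the two residual-weighting choices in standard calibration-estimator notation (the textbook form, not necessarily the exact notation of the paper):

```latex
% Calibrated (raking / GREG) estimator, residuals, and the two linearization
% variance estimators; d_k = 1/\pi_k are design weights, w_k = g_k d_k the
% calibrated weights, e_k the residuals from the assisting regression.
\[
\hat{t}_{y,\mathrm{cal}} = \sum_{k \in s} w_k y_k ,
\qquad e_k = y_k - \mathbf{x}_k^{\top}\hat{B}
\]
\[
\hat{V}_d = \sum_{k \in s}\sum_{l \in s}
\frac{\pi_{kl}-\pi_k \pi_l}{\pi_{kl}}\,(d_k e_k)(d_l e_l)
\qquad\text{(residuals carried by the design weights)}
\]
\[
\hat{V}_w = \sum_{k \in s}\sum_{l \in s}
\frac{\pi_{kl}-\pi_k \pi_l}{\pi_{kl}}\,(w_k e_k)(w_l e_l)
\qquad\text{(residuals carried by the calibrated weights)}
\]
```

The abstract's finding is that, under nonresponse, the first form can be severely biased while the second displays much less bias.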
Release date: 2010-12-21
- 42. Respondent differences and length of data collection in the Behavioral Risk Factor Surveillance System (Archived)
Articles and reports: 12-001-X201000211384
Description:
The current economic downturn in the US could challenge costly strategies in survey operations. In the Behavioral Risk Factor Surveillance System (BRFSS), ending the monthly data collection at 31 days could be a less costly alternative. However, this could exclude a portion of interviews completed after 31 days (late responders), whose characteristics could differ in many respects from those of respondents who completed the survey within 31 days (early responders). We examined whether there are differences between early and late responders in demographics, health-care coverage, general health status, health risk behaviors, and chronic disease conditions or illnesses. We used 2007 BRFSS data, in which a representative sample of the noninstitutionalized adult U.S. population was selected using a random digit dialing method. Late responders were significantly more likely to be male; to report their race/ethnicity as Hispanic; to have annual income higher than $50,000; to be younger than 45 years of age; to have less than a high school education; to have health-care coverage; and to report good health; they were significantly less likely to report hypertension, diabetes, or obesity. The observed differences between early and late responders on survey estimates are unlikely to have much influence on national and state-level estimates. However, as the proportion of late responders may increase in the future, their impact on surveillance estimates should be examined before they are excluded from the analysis. Analyses of late responders only should combine several years of data to produce reliable estimates.
Release date: 2010-12-21
- 43. Examining survey participation and response quality: The significance of topic salience and incentives (Archived)
Articles and reports: 12-001-X201000111252
Description:
Nonresponse bias has been a long-standing issue in survey research (Brehm 1993; Dillman, Eltinge, Groves and Little 2002), with numerous studies seeking to identify factors that affect both item and unit response. To contribute to the broader goal of minimizing survey nonresponse, this study considers several factors that can affect survey nonresponse, using a 2007 Animal Welfare Survey conducted in Ohio, USA. In particular, the paper examines the extent to which topic salience and incentives affect survey participation and item nonresponse, drawing on leverage-saliency theory (Groves, Singer and Corning 2000). We find that participation in a survey is affected by its subject context (as this exerts either positive or negative leverage on sampled units) and by prepaid incentives, which is consistent with leverage-saliency theory. Our expectations are also confirmed by the finding that item nonresponse, our proxy for response quality, does vary by proximity to agriculture and the environment (residential location, knowledge about how food is grown, and views about the importance of animal welfare). However, the data suggest that item nonresponse does not vary according to whether or not a respondent received an incentive.
Release date: 2010-06-29
- 44. A standardization of randomized response strategies (Archived)
Articles and reports: 12-001-X200900211037
Description:
Randomized response strategies, which were originally developed as statistical methods to reduce nonresponse as well as untruthful answering, can also be applied in the field of statistical disclosure control for public use microdata files. In this paper a standardization of randomized response techniques for the estimation of proportions of identifying or sensitive attributes is presented. The statistical properties of the standardized estimator are derived for general probability sampling. To analyse the effect of different choices of the method's implicit "design parameters" on the performance of the estimator, we have to include measures of privacy protection in our considerations. These yield variance-optimum design parameters for a given level of privacy protection. To this end, the variables have to be classified into different categories of sensitivity. A real-data example applies the technique in a survey on academic cheating behaviour.
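As a concrete and deliberately simple illustration of the kind of technique being standardized here, the sketch below simulates the classic Warner randomized-response design and its moment estimator of a sensitive proportion; the design parameter p and the simulated values are assumptions for illustration, not the paper's standardized estimator.

```python
# Illustrative sketch (not the paper's standardized estimator): the classic
# Warner randomized-response design under simple random sampling. Each
# respondent is directed, with probability p, to answer the sensitive question
# and otherwise its complement; only the yes/no answer is observed.
import numpy as np

def warner_estimate(answers, p):
    """Moment estimator of the sensitive proportion from the observed answers."""
    lam_hat = answers.mean()                         # observed share of "yes"
    return (lam_hat - (1.0 - p)) / (2.0 * p - 1.0)   # requires p != 0.5

rng = np.random.default_rng(42)
pi_true, p, n = 0.20, 0.7, 5000                      # made-up prevalence, design parameter, sample size
trait = rng.random(n) < pi_true                      # true sensitive status (never observed directly)
asked_direct = rng.random(n) < p                     # which question the randomizing device selects
answers = np.where(asked_direct, trait, ~trait).astype(float)

print(warner_estimate(answers, p))                   # close to 0.20 on average
```

Choosing p closer to 1 lowers the variance of the estimator but weakens privacy protection, which is exactly the trade-off that the "design parameters" in the abstract formalize.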
Release date: 2009-12-23
- 45. Treatments for link nonresponse in indirect sampling (Archived)
Articles and reports: 12-001-X200900211038
Description:
We examine how to overcome the overestimation that arises when the generalized weight share method (GWSM) is used in the presence of link nonresponse in indirect sampling. Several adjustment methods that incorporate link nonresponse into the GWSM have been constructed for situations both with and without the availability of auxiliary variables. A simulation study based on a longitudinal survey is presented using some of the adjustment methods we recommend. The simulation results show that these adjusted GWSMs perform well in reducing both estimation bias and variance. The gain in bias reduction is significant.
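For context, a minimal sketch of the weight-sharing step in one common formulation of the GWSM (notation assumed here for illustration):

```latex
% Sampled frame units j in s^A share their design weights d_j across the
% target units i they are linked to; l_{j,i} = 1 indicates a link and L_i is
% the total number of links pointing to unit i.
\[
w_i = \sum_{j \in s^{A}} d_j \,\frac{l_{j,i}}{L_i},
\qquad L_i = \sum_{j \in U^{A}} l_{j,i},
\qquad \hat{t}_y = \sum_{i} w_i\, y_i
\]
```

Roughly speaking, when some links go unreported the link totals L_i tend to be understated while the reported links keep their full design weight, which inflates the shared weights and produces the overestimation the adjustments above aim to correct.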
Release date: 2009-12-23
- 46. Nonparametric propensity weighting for survey nonresponse through local polynomial regression (Archived)
Articles and reports: 12-001-X200900211039
Description:
Propensity weighting is a procedure to adjust for unit nonresponse in surveys. One way of implementing this procedure is to divide the sampling weights by estimates of the probabilities that the sampled units respond to the survey. Typically, these estimates are obtained by fitting parametric models, such as logistic regression. The resulting adjusted estimators may become biased when the specified parametric models are incorrect. To avoid misspecifying such a model, we consider nonparametric estimation of the response probabilities by local polynomial regression. We study the asymptotic properties of the resulting estimator under quasi-randomization. The practical behavior of the proposed nonresponse adjustment approach is evaluated on NHANES data.
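To make the idea concrete, here is a minimal sketch, with a single auxiliary variable, a Gaussian kernel and a fixed bandwidth, all assumptions for illustration rather than the authors' implementation: response probabilities are estimated by local linear regression and then used as a nonresponse adjustment.

```python
# Minimal sketch: nonparametric (local linear) estimation of response
# probabilities with a Gaussian kernel, then inverse-propensity adjustment
# of the design weights. Variable names, the single auxiliary variable and
# the fixed bandwidth are assumptions for illustration.
import numpy as np

def local_linear_propensity(x, r, x0, h):
    """Local linear regression of the response indicator r on x, evaluated at x0."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)      # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * k                               # weight each observation
    beta = np.linalg.solve(XtW @ X, XtW @ r)
    return float(np.clip(beta[0], 1e-3, 1.0))   # local intercept = fitted value at x0

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)                          # auxiliary variable known for all sampled units
p_true = 1.0 / (1.0 + np.exp(-(0.5 + x)))       # true (unknown) response propensities
r = (rng.random(n) < p_true).astype(float)      # response indicators
d = np.full(n, 50.0)                            # design weights (constant here for simplicity)

h = 0.5                                         # bandwidth; would be tuned in practice
p_hat = np.array([local_linear_propensity(x, r, xi, h) for xi in x])

# Propensity-adjusted weights for the respondents
w_adjusted = d[r == 1] / p_hat[r == 1]
```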
Release date: 2009-12-23
- Articles and reports: 12-001-X200900211043
Description:
Business surveys often use a one-stage stratified simple random sampling without replacement design with some certainty strata. Although weight adjustment is typically applied for unit nonresponse, the variability due to nonresponse may be omitted in practice when estimating variances. This is especially problematic when there are certainty strata. We derive some variance estimators that are consistent when the number of sampled units in each weighting cell is large, using the jackknife, linearization, and modified jackknife methods. The derived variance estimators are first applied to empirical data from the Annual Capital Expenditures Survey conducted by the U.S. Census Bureau and are then examined in a simulation study.
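As a rough sketch of why the adjustment step matters, one standard form of the stratified delete-one jackknife is (notation assumed for illustration, not necessarily the paper's):

```latex
% \hat{t}_{(hj)} is the estimate recomputed after deleting unit j of stratum h,
% including re-deriving the weighting-cell nonresponse adjustment factors.
\[
\hat{V}_{\mathrm{JK}}(\hat{t}) \;=\; \sum_{h} \frac{n_h - 1}{n_h}
\sum_{j \in s_h} \bigl( \hat{t}_{(hj)} - \hat{t} \bigr)^{2}
\]
```

Under complete response, certainty strata contribute no sampling variance; with nonresponse they do contribute variance, and capturing it requires recomputing (or explicitly linearizing) the weighting-cell adjustment in each replicate rather than treating the adjusted weights as fixed.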
Release date: 2009-12-23
- 48. Selection models for evaluating assumptions of methods that compensate for missing values in sample surveys (Archived)
Articles and reports: 11-522-X200800010951
Description:
Missing values caused by item nonresponse represent one type of non-sampling error that occurs in surveys. When cases with missing values are discarded in statistical analyses, estimates may be biased because of differences between responders with missing values and responders who do not have missing values. Also, when variables in the data have different patterns of missingness among sampled cases, and cases with missing values are discarded in statistical analyses, those analyses may yield inconsistent results because they are based on different subsets of sampled cases that may not be comparable. However, analyses that discard cases with missing values may be valid provided those values are missing completely at random (MCAR). Are those missing values MCAR?
To compensate, missing values are often imputed or survey weights are adjusted using weighting class methods. Subsequent analyses based on those compensations may be valid provided that missing values are missing at random (MAR) within each of the categorizations of the data implied by the independent variables of the models that underlie those adjustment approaches. Are those missing values MAR?
Because missing values are not observed, the MCAR and MAR assumptions made by statistical analyses are infrequently examined. This paper describes a selection model from which statistical significance tests of the MCAR and MAR assumptions can be carried out even though the missing values are not observed. Data from the National Immunization Survey conducted by the U.S. Department of Health and Human Services are used to illustrate the methods.
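For reference, the standard Rubin-style statements of the two assumptions being tested are summarized below (this is the general definition, not the paper's selection model):

```latex
% R is the response (missingness) indicator; Y_obs and Y_mis are the observed
% and missing parts of the survey data.
\[
\text{MCAR:}\quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = P(R)
\qquad\qquad
\text{MAR:}\quad P(R \mid Y_{\mathrm{obs}}, Y_{\mathrm{mis}}) = P(R \mid Y_{\mathrm{obs}})
\]
```

Complete-case analysis generally relies on MCAR, while imputation and weighting-class adjustments rely on MAR within the adjustment cells, which is why these are the two questions the selection model is built to test.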
Release date: 2009-12-03
- 49. The effect of non-response follow-up in a Survey on Living Conditions among Immigrants in Norway (Archived)
Articles and reports: 11-522-X200800010952
Description:
In a survey whose results are estimated by simple averages, we compare the effect on the results of a follow-up among non-respondents with that of weighting based on the last ten percent of respondents. The data used were collected in a Survey of Living Conditions among Immigrants in Norway that was carried out in 2006.
Release date: 2009-12-03
- 50. Nonresponse bias analysis using reluctant respondents who responded after receiving monetary incentives (Archived)
Articles and reports: 11-522-X200800010953
Description:
As survey researchers attempt to maintain traditionally high response rates, reluctant respondents have resulted in increasing data collection costs. This respondent reluctance may be related to the amount of time it takes to complete an interview in large-scale, multi-purpose surveys, such as the National Survey of Recent College Graduates (NSRCG). Recognizing that respondent burden or questionnaire length may contribute to lower response rates, in 2003, following several months of data collection under the standard data collection protocol, the NSRCG offered its nonrespondents monetary incentives about two months before the end of data collection. In conjunction with the incentive offer, the NSRCG also offered persistent nonrespondents an opportunity to complete a much-abbreviated interview consisting of a few critical items. The late respondents who completed the interviews as a result of the incentive and critical-items-only questionnaire offers may provide some insight into the issue of nonresponse bias and the likelihood that such interviewees would have remained survey nonrespondents if these refusal conversion efforts had not been made.
In this paper, we define "reluctant respondents" as those who responded to the survey only after extra efforts were made beyond the ones initially planned in the standard data collection protocol. Specifically, reluctant respondents in the 2003 NSRCG are those who responded to the regular or shortened questionnaire following the incentive offer. Our conjecture was that the behavior of the reluctant respondents would be more like that of nonrespondents than of respondents to the surveys. This paper describes an investigation of reluctant respondents and the extent to which they are different from regular respondents. We compare different response groups on several key survey estimates. This comparison will expand our understanding of nonresponse bias in the NSRCG, and of the characteristics of nonrespondents themselves, thus providing a basis for changes in the NSRCG weighting system or estimation procedures in the future.
Release date: 2009-12-03
Data (0) (0 results)
No content available at this time.
Analysis (140) (results 51 to 60 of 140)
- 51. Evaluation and treatment of non-response in the ELFE cohort: Results of the pilot studies (Archived)
Articles and reports: 11-522-X200800010960
Description:
Non-response is inevitable in any survey, despite all the effort put into reducing it at the various stages of the survey. In particular, non-response can cause bias in the estimates. In addition, non-response is an especially serious problem in longitudinal studies because the sample shrinks over time. France's ELFE (Étude Longitudinale Française depuis l'Enfance) is a project that aims to track 20,000 children from birth to adulthood using a multidisciplinary approach. This paper is based on the results of the initial pilot studies conducted in 2007 to test the survey's feasibility and acceptance. The participation rates are presented (response rate, non-response factors) along with a preliminary description of the non-response treatment methods being considered.
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010975
Description:
A major issue in official statistics is the availability of objective measures to support fact-based decision making. Istat has developed an Information System to assess survey quality. Among other standard quality indicators, nonresponse rates are systematically computed and stored for all surveys. Such a rich information base permits analysis over time and comparisons among surveys. The paper focuses on the analysis of the interrelationships between data collection mode, other survey characteristics and total nonresponse. Particular attention is devoted to the extent to which multi-mode data collection improves response rates.
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010976
Description:
Many survey organizations use the response rate as an indicator for the quality of survey data. As a consequence, a variety of measures are implemented to reduce non-response or to maintain response at an acceptable level. However, the response rate is not necessarily a good indicator of non-response bias. A higher response rate does not imply smaller non-response bias. What matters is how the composition of the response differs from the composition of the sample as a whole. This paper describes the concept of R-indicators to assess potential differences between the sample and the response. Such indicators may facilitate analysis of survey response over time, between various fieldwork strategies or data collection modes. Some practical examples are given.
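In the form usually cited, an R-indicator summarizes how much the model-estimated response propensities vary over the sample (a sketch of the standard definition, not necessarily the exact formulation in this paper):

```latex
% S(\hat{\rho}) is the standard deviation of response propensities estimated
% from a model (e.g., logistic regression) on auxiliary variables available
% for the whole sample, respondents and nonrespondents alike.
\[
\hat{R}(\rho) = 1 - 2\, S(\hat{\rho})
\]
```

A value of 1 means every unit is equally likely to respond (a fully representative response), and lower values flag a greater potential for nonresponse bias regardless of the response rate itself, which is the point the abstract makes.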
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010983
Description:
The US Census Bureau conducts monthly, quarterly, and annual surveys of the American economy and a census every 5 years. These programs require significant business effort. New technologies, new forms of organization, and scarce resources affect the ability of businesses to respond. Changes also affect what businesses expect from the Census Bureau, the Census Bureau's internal systems, and the way businesses interact with the Census Bureau.
For several years, the Census Bureau has maintained a special relationship with large companies to help them prepare for the census. We have also worked toward company-centric communication across all programs. A relationship model has emerged that focuses on infrastructure and business practices, and allows the Census Bureau to be more responsive.
This paper focuses on the Census Bureau's company-centric communications and systems. We describe important initiatives and challenges, and we review their impact on Census Bureau practices and respondent behavior.
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010984
Description:
The Enterprise Portfolio Manager (EPM) Program at Statistics Canada demonstrated the value of employing a "holistic" approach to managing the relationships we have with our largest and most complex business respondents.
Understanding that different types of respondents should receive different levels of intervention, and having learnt the value of employing an "enterprise-centric" approach to managing relationships with important, complex data providers, STC has embraced a response management strategy that divides its business population into four tiers based on size, complexity and importance to survey estimates. With the population thus segmented, different response management approaches have been developed, appropriate to the relative contribution of each segment. This allows STC to target resources to the areas where it stands to achieve the greatest return on investment. Tier I and Tier II have been defined as critical to survey estimates.
Tier I represents the largest, most complex businesses in Canada and is managed through the Enterprise Portfolio Management Program.
Tier II represents businesses that are smaller or less complex than Tier I but still significant in developing accurate measures of the activities of individual industries.
Tier III includes the more medium-sized businesses, which form the bulk of survey samples.
Tier IV represents the smallest businesses, which are excluded from collection; for these, STC relies entirely on tax information.
The presentation will outline:
- It works! Results and metrics from the programs that have operationalized the Holistic Response Management strategy.
- Developing a less subjective, methodological approach to segmenting the business survey population for HRM.
- The project team's work to capture the complexity factors intrinsically used by experienced staff to rank respondents.
- What our so-called "problem" respondents have told us about the issues underlying non-response.
Release date: 2009-12-03
- 56. Non-response in a random digit dialling survey: The experience of the General Social Survey's Cycle 21 (2007) (Archived)
Articles and reports: 11-522-X200800010994
Description:
The growing difficulty of reaching respondents has a general impact on non-response in telephone surveys, especially those that use random digit dialling (RDD), such as the General Social Survey (GSS). The GSS is an annual multipurpose survey with 25,000 respondents. Its aim is to monitor the characteristics of and major changes in Canada's social structure. GSS Cycle 21 (2007) was about the family, social support and retirement. Its target population consisted of persons aged 45 and over living in the 10 Canadian provinces. For more effective coverage, part of the sample was taken from a follow-up with the respondents of GSS Cycle 20 (2006), which was on family transitions. The remainder was a new RDD sample. In this paper, we describe the survey's sampling plan and the random digit dialling method used. Then we discuss the challenges of calculating the non-response rate in an RDD survey that targets a subset of a population, for which the in-scope population must be estimated or modelled. This is done primarily through the use of paradata. The methodology used in GSS Cycle 21 is presented in detail.
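One common way to write the problem described here, in the spirit of the AAPOR-style formulas and with symbols assumed for illustration, is a response rate whose denominator includes only an estimated share of the cases of unknown eligibility:

```latex
% I: completed interviews; R: refusals; NC: non-contacts; O: other eligible
% non-interviews; U: cases of unknown eligibility (e.g., numbers that never
% answer); e: estimated share of the unknown cases that are in scope (here,
% households with a person aged 45 or over in the ten provinces).
\[
RR = \frac{I}{\,I + R + NC + O + e\,U\,}
\]
```

Estimating e, or equivalently modelling the in-scope population, is where the paradata mentioned above come in.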
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010996
Description:
In recent years, the use of paradata has become increasingly important to the management of collection activities at Statistics Canada. Particular attention has been paid to social surveys conducted over the phone, like the Survey of Labour and Income Dynamics (SLID). For recent SLID data collections, the number of call attempts was capped at 40 calls. Investigations of the SLID Blaise Transaction History (BTH) files were undertaken to assess the impact of the cap on calls. The purpose of the first study was to inform decisions on the capping of call attempts, while the second study focused on the nature of nonresponse given the limit of 40 attempts.
The use of paradata as auxiliary information for studying and accounting for survey nonresponse was also examined. Nonresponse adjustment models using different paradata variables gathered at the collection stage were compared to the current models based on available auxiliary information from the Labour Force Survey.
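A minimal sketch of the kind of comparison described, assuming hypothetical paradata variables (number of call attempts, an any-contact indicator, time slot of the first call) and a plain logistic response-propensity model; this is illustrative, not SLID's production adjustment:

```python
# Minimal sketch, not SLID's production system: a logistic response-propensity
# model whose predictors are paradata gathered at collection, with the inverse
# fitted propensity used as a nonresponse adjustment factor.
# All variable names and data below are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
attempts = rng.integers(1, 41, size=n)               # capped at 40 calls, as above
any_contact = (rng.random(n) < 0.8).astype(int)
evening_first_call = (rng.random(n) < 0.5).astype(int)
X = np.column_stack([attempts, any_contact, evening_first_call])

# Synthetic response indicator: responding gets less likely as attempts pile up
logit = 1.5 - 0.06 * attempts + 1.0 * any_contact
respond = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

propensity_model = LogisticRegression().fit(X, respond)
p_hat = propensity_model.predict_proba(X)[:, 1]

design_weight = np.full(n, 100.0)                    # placeholder design weights
adjusted_weight = design_weight[respond == 1] / p_hat[respond == 1]
```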
Release date: 2009-12-03
- Articles and reports: 11-522-X200800010999
Description:
The choice of the number of call attempts in a telephone survey is an important decision. A large number of call attempts makes the data collection costly and time-consuming, while a small number of attempts decreases the response set from which conclusions are drawn and increases the variance. The decision can also have an effect on the nonresponse bias. In this paper we study the effects of the number of call attempts on the nonresponse rate and the nonresponse bias in two surveys conducted by Statistics Sweden: the Labour Force Survey (LFS) and Household Finances (HF).
Using paradata, we calculate the response rate as a function of the number of call attempts. To estimate the nonresponse bias we use estimates of some register variables, for which observations are available for both respondents and nonrespondents. We also calculate estimates of some real survey parameters as functions of the number of call attempts. The results indicate that it is possible to reduce the current number of call attempts without increasing the nonresponse bias.
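The calculation has a simple shape once call-record paradata are in hand; the sketch below, which uses entirely synthetic data and made-up parameters rather than the Statistics Sweden files, computes the response rate and the bias of a respondent mean of a register variable under different caps on the number of attempts:

```python
# Synthetic illustration: response rate and nonresponse bias of a register-
# variable mean as functions of a cap on the number of call attempts.
# All data and parameters below are made up for the sketch.
import numpy as np

rng = np.random.default_rng(7)
n = 5000
register_y = rng.normal(50, 10, size=n)          # register variable, known for all sampled units

# Attempt at which each case would respond; cases with larger register values
# tend to need more attempts, and about 10% never respond (np.inf).
needed = np.clip(np.ceil(1 + rng.exponential(3, size=n) + 0.2 * (register_y - 50)), 1, None)
attempt_of_response = np.where(rng.random(n) < 0.9, needed, np.inf)

for cap in (1, 3, 5, 10, 20):
    responded = attempt_of_response <= cap
    response_rate = responded.mean()
    bias = register_y[responded].mean() - register_y.mean()   # respondent mean vs. full-sample mean
    print(f"cap={cap:2d}  response rate={response_rate:.2f}  bias={bias:+.2f}")
```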
Release date: 2009-12-03
- Articles and reports: 11-522-X200800011000
Description:
The present report reviews the results of a mailing experiment that took place within a large-scale demonstration project. A postcard and stickers were sent to a random group of project participants in the period between a contact call and a survey. The researchers hypothesized that, because of the additional mailing (the treatment), the response rates to the upcoming survey would increase. There was, however, no difference between the response rates of the treatment group that received the additional mailing and the control group. In the specific circumstances of the mailing experiment, sending project participants a postcard and stickers as a reminder of the upcoming survey and of their participation in the pilot project was not an effective way to increase response rates.
Release date: 2009-12-03
- 60. Is there really any benefit in sending out introductory letters in Random Digit Dialling (RDD) surveys? (Archived)
Articles and reports: 11-522-X200800011001
Description:
Currently underway, the Québec Population Health Survey (EQSP), for which collection will wrap up in February 2009, provides an opportunity, because of the size of its sample, to assess the impact that sending out introductory letters to respondents has on the response rate in a controlled environment. Since this regional telephone survey is expected to have more than 38,000 respondents, it was possible to use part of its sample for this study without having too great an impact on its overall response rate. In random digit dialling (RDD) surveys such as the EQSP, one of the main challenges in sending out introductory letters is reaching the survey units. Doing so depends largely on our capacity to associate an address with the sample units and on the quality of that information.
This article describes the controlled study proposed by the Institut de la statistique du Québec to measure the effect that sending out introductory letters to respondents had on the survey's response rate.
Release date: 2009-12-03
Reference (1) (1 result)
- Surveys and statistical programs – Documentation: 75-005-M2023001
Description: This document provides information on the evolution of response rates for the Labour Force Survey (LFS) and a discussion of the evaluation of two aspects of data quality that ensure the LFS estimates continue providing an accurate portrait of the Canadian labour market.
Release date: 2023-10-30