Response and nonresponse

Results

All (141) (0 to 10 of 141 results)

  • Articles and reports: 12-001-X202400100009
    Description: Our comments respond to discussion from Sen, Brick, and Elliott. We weigh the potential upside and downside of Sen’s suggestion of using machine learning to identify bogus respondents through interactions and improbable combinations of variables. We join Brick in reflecting on bogus respondents’ impact on the state of commercial nonprobability surveys. Finally, we consider Elliott’s discussion of solutions to the challenge raised in our study.
    Release date: 2024-06-25
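
For a concrete sense of what screening on "interactions and improbable combinations of variables" can look like, here is a minimal rule-based sketch. The variables, thresholds and data are invented, and this is not the machine learning approach Sen proposes, only an illustration of the underlying idea:

```python
import pandas as pd

# Hypothetical respondent file; the flags below are illustrative rules,
# not the screening method discussed by Sen or by Kennedy, Mercer and Lau.
df = pd.DataFrame({
    "age":        [34, 14, 52, 23],
    "education":  ["BA", "PhD", "HS", "BA"],
    "n_children": [2, 9, 0, 1],
})

# Improbable combinations of variables serve as bogus-respondent signals.
df["flag_young_phd"] = (df["age"] < 22) & (df["education"] == "PhD")
df["flag_many_kids"] = (df["n_children"] > 7) & (df["age"] < 25)
df["suspect"] = df[["flag_young_phd", "flag_many_kids"]].any(axis=1)
print(df[df["suspect"]])
```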

  • Articles and reports: 12-001-X202400100010
    Description: This discussion summarizes the interesting new findings around measurement errors in opt-in surveys by Kennedy, Mercer and Lau (KML). While KML enlighten readers about “bogus responding” and possible patterns within it, this discussion suggests combining these new findings with other avenues of research in nonprobability sampling, such as improving representativeness.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100011
    Description: Kennedy, Mercer, and Lau explore misreporting by respondents in non-probability samples and identify a new phenomenon: deliberate misreporting of demographic characteristics. This finding suggests that the “arms race” between researchers and those determined to disrupt the practice of social science is not over, and that researchers need to account for such respondents when using high-quality probability surveys to help reduce error in non-probability samples.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100012
    Description: Nonprobability samples are quick and low-cost and have become popular for some types of survey research. Kennedy, Mercer and Lau examine data quality issues associated with opt-in nonprobability samples frequently used in the United States. They show that the estimates from these samples have serious problems that go beyond representativeness. A total survey error perspective is important for evaluating all types of surveys.
    Release date: 2024-06-25

  • Articles and reports: 12-001-X202400100013
    Description: Statistical approaches developed for nonprobability samples generally focus on nonrandom selection as the primary reason survey respondents might differ systematically from the target population. Well-established theory states that in these instances, by conditioning on the necessary auxiliary variables, selection can be rendered ignorable and survey estimates will be free of bias. But this logic rests on the assumption that measurement error is nonexistent or small. In this study we test this assumption in two ways. First, we use a large benchmarking study to identify subgroups for which errors in commercial, online nonprobability samples are especially large in ways that are unlikely to be due to selection effects. Then we present a follow-up study examining one cause of the large errors: bogus responding (i.e., survey answers that are fraudulent, mischievous or otherwise insincere). We find that bogus responding, particularly among respondents identifying as young or Hispanic, is a significant and widespread problem in commercial, online nonprobability samples, at least in the United States. This research highlights the need for statisticians working with commercial nonprobability samples to address bogus responding and issues of representativeness – not just the latter.
    Release date: 2024-06-25

  • Articles and reports: 75-005-M2024001
    Description: From 2010 to 2019, the Labour Force Survey (LFS) response rate – or the proportion of selected households who complete an LFS interview – had been on a slow downward trend, due to a range of social and technological changes which have made it more challenging to contact selected households and to persuade Canadians to participate when they are contacted. These factors were exacerbated by the COVID-19 pandemic, which resulted in the suspension of face-to-face interviewing between April 2020 and fall 2022. Statistics Canada is committed to restoring LFS response rates to the greatest extent possible. This technical paper discusses two initiatives that are underway to ensure that the LFS estimates continue to provide an accurate and representative portrait of the Canadian labour market.
    Release date: 2024-02-16

  • Articles and reports: 12-001-X202300200006
    Description: Survey researchers are increasingly turning to multimode data collection to deal with declining survey response rates and increasing costs. An efficient approach offers the less costly modes (e.g., web) first, followed by a more expensive mode for a subsample of the units (e.g., households) within each primary sampling unit (PSU). We present two alternatives to this traditional design. One alternative subsamples PSUs rather than units to constrain costs. The second is a hybrid design that includes a clustered (two-stage) sample and an independent, unclustered sample. Using a simulation, we demonstrate that the hybrid design has considerable advantages.
    Release date: 2024-01-03
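
A toy Monte Carlo along these lines can make the comparison concrete. The sketch below is not the authors' simulation; it contrasts the variance of a sample mean under a purely clustered two-stage design against a hybrid that spends the same number of interviews on a smaller clustered component plus an independent unclustered sample, under an assumed intraclass correlation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed population: 500 PSUs of 100 units each, with between-PSU
# variation inducing an intraclass correlation in the outcome.
n_psu, psu_size = 500, 100
psu_effects = rng.normal(0, 1.0, n_psu)                  # between-PSU spread
pop = psu_effects[:, None] + rng.normal(0, 2.0, (n_psu, psu_size))

def clustered_mean(m_psu, n_per_psu):
    """Two-stage sample: m_psu PSUs, n_per_psu units within each."""
    psus = rng.choice(n_psu, m_psu, replace=False)
    units = rng.choice(psu_size, (m_psu, n_per_psu))
    return pop[psus[:, None], units].mean()

def hybrid_mean(m_psu, n_per_psu, n_srs):
    """Hybrid: a clustered component plus an independent unclustered SRS."""
    clustered = [pop[p, rng.choice(psu_size, n_per_psu)]
                 for p in rng.choice(n_psu, m_psu, replace=False)]
    srs = pop.ravel()[rng.choice(pop.size, n_srs, replace=False)]
    return np.concatenate([np.concatenate(clustered), srs]).mean()

# Same total number of interviews (1000) in both designs.
reps = 2000
v_clust = np.var([clustered_mean(50, 20) for _ in range(reps)])
v_hyb = np.var([hybrid_mean(25, 20, 500) for _ in range(reps)])
print(f"clustered design variance: {v_clust:.5f}")
print(f"hybrid design variance:    {v_hyb:.5f}")  # smaller, given the ICC
```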

  • Surveys and statistical programs – Documentation: 75-005-M2023001
    Description: This document provides information on the evolution of response rates for the Labour Force Survey (LFS) and a discussion of the evaluation of two aspects of data quality that ensure the LFS estimates continue providing an accurate portrait of the Canadian labour market.
    Release date: 2023-10-30

  • Stats in brief: 11-001-X202231822683
    Description: Release published in The Daily – Statistics Canada’s official release bulletin
    Release date: 2022-11-14

  • Articles and reports: 89-648-X2022001
    Description: This report explores the size and nature of the attrition challenges faced by the Longitudinal and International Study of Adults (LISA), as well as the use of a non-response weight adjustment and calibration strategy to mitigate the effects of attrition on the LISA estimates. The study focuses on data from waves 1 (2012) to 4 (2018) and uses practical examples based on selected demographic variables to illustrate how attrition can be assessed and treated.
    Release date: 2022-11-14
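
A minimal sketch of the two-step strategy this report describes: a weighting-class nonresponse adjustment followed by calibration (here, simple raking) to known margins. Class definitions, variable names and margins are invented for the illustration:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Toy wave file: base weights, a response flag, and demographic
# variables (names and categories are hypothetical).
n = 5000
df = pd.DataFrame({
    "base_w": rng.uniform(50, 150, n),
    "agegrp": rng.choice(["15-34", "35-54", "55+"], n),
    "region": rng.choice(["East", "West"], n),
})
# Attrition that depends on age group, as attrition typically does.
p_resp = df["agegrp"].map({"15-34": 0.55, "35-54": 0.70, "55+": 0.80})
df["responded"] = rng.random(n) < p_resp

# Step 1: weighting-class adjustment -- divide respondents' weights by
# the weighted response rate within each class (here, age group).
rr = (df[df.responded].groupby("agegrp")["base_w"].sum()
      / df.groupby("agegrp")["base_w"].sum())
resp = df[df.responded].copy()
resp["w"] = resp["base_w"] / resp["agegrp"].map(rr)

# Step 2: rake the adjusted weights to (assumed) known full-sample margins.
margins = {"agegrp": df.groupby("agegrp")["base_w"].sum(),
           "region": df.groupby("region")["base_w"].sum()}
for _ in range(10):                       # a few IPF passes suffice here
    for var, target in margins.items():
        current = resp.groupby(var)["w"].sum()
        resp["w"] *= resp[var].map(target / current)

print(resp.groupby("agegrp")["w"].sum().round(0))  # matches the margins
```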
Data (0) (0 results)

No content available at this time.

Analysis (140) (50 to 60 of 140 results)

  • Articles and reports: 11-522-X200800010960
    Description: Non-response is inevitable in any survey, despite all the effort put into reducing it at the various stages of the survey. It can bias the estimates, and it is an especially serious problem in longitudinal studies because the sample shrinks over time. France's ELFE (Étude Longitudinale Française depuis l'Enfance) is a project that aims to track 20,000 children from birth to adulthood using a multidisciplinary approach. This paper is based on the results of the initial pilot studies conducted in 2007 to test the survey's feasibility and acceptance. The participation rates are presented (response rate, non-response factors), along with a preliminary description of the non-response treatment methods being considered.
    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010975
    Description: A major issue in official statistics is the availability of objective measures to support fact-based decision making. Istat has developed an Information System to assess survey quality. Among other standard quality indicators, nonresponse rates are systematically computed and stored for all surveys. Such a rich information base permits analysis over time and comparisons among surveys. The paper focuses on how data collection mode and other survey characteristics interact to affect total nonresponse. Particular attention is devoted to the extent to which multi-mode data collection improves response rates.
    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010976
    Description: Many survey organizations use the response rate as an indicator for the quality of survey data. As a consequence, a variety of measures are implemented to reduce non-response or to maintain response at an acceptable level. However, the response rate is not necessarily a good indicator of non-response bias. A higher response rate does not imply smaller non-response bias. What matters is how the composition of the response differs from the composition of the sample as a whole. This paper describes the concept of R-indicators to assess potential differences between the sample and the response. Such indicators may facilitate analysis of survey response over time, between various fieldwork strategies or data collection modes. Some practical examples are given.
    Release date: 2009-12-03
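
For readers who want a concrete handle on the concept: the sample-based R-indicator is R = 1 - 2*S(rho), where S(rho) is the standard deviation of response propensities estimated from auxiliary variables available for respondents and nonrespondents alike. A minimal sketch on simulated data (the propensity model and variables are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Auxiliary frame variables known for respondents AND nonrespondents
# (hypothetical: age and an urbanicity flag).
n = 10_000
age = rng.uniform(18, 90, n)
urban = rng.integers(0, 2, n)
# Response depends on both, so propensities vary across the sample.
p = 1 / (1 + np.exp(-(-1.0 + 0.03 * (age - 50) - 0.4 * urban)))
responded = rng.random(n) < p

# Estimate response propensities from the auxiliary variables.
X = np.column_stack([age, urban])
rho = LogisticRegression(max_iter=1000).fit(X, responded).predict_proba(X)[:, 1]

# R-indicator: 1 when response is perfectly representative (all
# propensities equal), approaching 0 as propensities diverge.
R = 1 - 2 * rho.std(ddof=1)
print(f"response rate: {responded.mean():.3f}, R-indicator: {R:.3f}")
```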

  • Articles and reports: 11-522-X200800010983
    Description: The US Census Bureau conducts monthly, quarterly, and annual surveys of the American economy and a census every 5 years. These programs require significant business effort. New technologies, new forms of organization, and scarce resources affect the ability of businesses to respond. Changes also affect what businesses expect from the Census Bureau, the Census Bureau's internal systems, and the way businesses interact with the Census Bureau.

    For several years, the Census Bureau has maintained a special relationship with large companies to help them prepare for the census. We have also worked toward company-centric communication across all programs. A relationship model has emerged that focuses on infrastructure and business practices, and allows the Census Bureau to be more responsive.

    This paper focuses on the Census Bureau's company-centric communications and systems. We describe important initiatives and challenges, and we review their impact on Census Bureau practices and respondent behavior.
    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010984
    Description: The Enterprise Portfolio Manager (EPM) Program at Statistics Canada demonstrated the value of employing a "holistic" approach to managing the relationships we have with our largest and most complex business respondents.

    Understanding that different types of respondents should receive different levels of intervention, and having learnt the value of an "enterprise-centric" approach to managing relationships with important, complex data providers, STC has embraced a response management strategy that divides its business population into four tiers based on size, complexity and importance to survey estimates. A different response management approach has been developed for each segment, appropriate to its relative contribution, which allows STC to target resources to the areas where it stands to achieve the greatest return on investment. Tiers I and II have been defined as critical to survey estimates.

    Tier I represents the largest, most complex businesses in Canada and is managed through the Enterprise Portfolio Management Program.

    Tier II represents businesses that are smaller or less complex than Tier I but still significant in developing accurate measures of the activities of individual industries.

    Tier III includes more medium-sized businesses, those that form the bulk of survey samples.

    Tier IV represents the smallest businesses, which are excluded from collection; for these, STC relies entirely on tax information.

    The presentation will outline:
      • It works! Results and metrics from the programs that have operationalized the Holistic Response Management strategy.
      • Developing a less subjective, methodological approach to segmenting the business survey population for HRM.
      • The project team's work to capture the complexity factors intrinsically used by experienced staff to rank respondents.
      • What our so-called "problem" respondents have told us about the issues underlying non-response.
    Release date: 2009-12-03
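
As a rough illustration of this kind of size-and-complexity segmentation (the actual STC criteria are more elaborate; the variables and thresholds below are entirely hypothetical):

```python
def assign_tier(revenue_m: float, n_establishments: int) -> str:
    """Toy segmentation rule; thresholds are illustrative only."""
    if revenue_m > 1000 or n_establishments > 100:
        return "Tier I"    # largest, most complex: portfolio-managed
    if revenue_m > 100 or n_establishments > 10:
        return "Tier II"   # still critical to industry estimates
    if revenue_m > 1:
        return "Tier III"  # bulk of survey samples
    return "Tier IV"       # excluded from collection; tax data only

print(assign_tier(2500, 40))   # Tier I
print(assign_tier(0.4, 1))     # Tier IV
```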

  • Articles and reports: 11-522-X200800010994
    Description: The growing difficulty of reaching respondents has a general impact on non-response in telephone surveys, especially those that use random digit dialling (RDD), such as the General Social Survey (GSS). The GSS is an annual multipurpose survey with 25,000 respondents. Its aim is to monitor the characteristics of and major changes in Canada's social structure. GSS Cycle 21 (2007) was about the family, social support and retirement. Its target population consisted of persons aged 45 and over living in the 10 Canadian provinces. For more effective coverage, part of the sample was taken from a follow-up with the respondents of GSS Cycle 20 (2006), which was on family transitions. The remainder was a new RDD sample. In this paper, we describe the survey's sampling plan and the random digit dialling method used. Then we discuss the challenges of calculating the non-response rate in an RDD survey that targets a subset of a population, for which the in-scope population must be estimated or modelled. This is done primarily through the use of paradata. The methodology used in GSS Cycle 21 is presented in detail.
    Release date: 2009-12-03
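
One common way to handle the challenge described here is to estimate an eligibility rate e among resolved cases and apply it to the unresolved ones, in the spirit of AAPOR's RR3. The sketch below uses invented case counts and is not the GSS Cycle 21 methodology:

```python
# Hypothetical RDD case dispositions for a survey of persons 45+.
interviews    = 6_200   # completed, in-scope respondents
refusals      = 2_100   # confirmed in scope, no interview
screened_out  = 9_500   # resolved, confirmed out of scope (under 45)
unresolved    = 14_000  # eligibility never determined (ring-no-answer, etc.)

# e: estimated proportion of unresolved numbers that are in scope,
# taken from the resolved cases (a modelling choice, not a constant).
resolved_in_scope = interviews + refusals
e = resolved_in_scope / (resolved_in_scope + screened_out)

# Response rate counting e * unresolved as estimated in-scope cases.
rr = interviews / (resolved_in_scope + e * unresolved)
print(f"e = {e:.3f}, estimated response rate = {rr:.3f}")
```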

  • Articles and reports: 11-522-X200800010996
    Description: In recent years, the use of paradata has become increasingly important to the management of collection activities at Statistics Canada. Particular attention has been paid to social surveys conducted over the phone, like the Survey of Labour and Income Dynamics (SLID). For recent SLID data collections, the number of call attempts was capped at 40. Investigations of the SLID Blaise Transaction History (BTH) files were undertaken to assess the impact of this cap: the first study was intended to inform decisions on capping call attempts, while the second focused on the nature of nonresponse given the limit of 40 attempts.

    The use of paradata as auxiliary information for studying and accounting for survey nonresponse was also examined. Nonresponse adjustment models using different paradata variables gathered at the collection stage were compared to the current models based on available auxiliary information from the Labour Force Survey.
    Release date: 2009-12-03

  • Articles and reports: 11-522-X200800010999
    Description: The choice of the number of call attempts in a telephone survey is an important decision. A large number of call attempts makes the data collection costly and time-consuming, while a small number of attempts decreases the response set from which conclusions are drawn and increases the variance. The decision can also have an effect on the nonresponse bias. In this paper we study the effects of the number of call attempts on the nonresponse rate and the nonresponse bias in two surveys conducted by Statistics Sweden: the Labour Force Survey (LFS) and Household Finances (HF).

    Using paradata, we calculate the response rate as a function of the number of call attempts. To estimate the nonresponse bias we use estimates of some register variables, where observations are available for both respondents and nonrespondents. We also calculate estimates of some real survey parameters as functions of the number of call attempts. The results indicate that it is possible to reduce the current number of call attempts without increasing the nonresponse bias.
    Release date: 2009-12-03
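
Given call-record paradata and a register variable observed for the whole sample, the curves the authors describe can be computed directly. A minimal sketch on simulated data; the dependence of response timing on the register variable is assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated sample: a register variable (e.g., employed yes/no) known for
# everyone, and the call attempt at which each case responded (inf = never).
n = 20_000
employed = rng.random(n) < 0.6
# Employed people are assumed harder to reach: more attempts needed.
mean_attempts = np.where(employed, 8.0, 4.0)
attempt_of_response = rng.exponential(mean_attempts)
attempt_of_response[rng.random(n) < 0.15] = np.inf   # hard-core nonresponse

full_sample_mean = employed.mean()
for cap in (1, 3, 5, 10, 20, 40):
    resp = attempt_of_response <= cap
    rr = resp.mean()
    bias = employed[resp].mean() - full_sample_mean
    print(f"cap={cap:>2}: response rate={rr:.3f}, bias in % employed={bias:+.4f}")
```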

  • Articles and reports: 11-522-X200800011000
    Description: The present report reviews the results of a mailing experiment that took place within a large scale demonstration project. A postcard and stickers were sent to a random group of project participants in the period between a contact call and a survey. The researchers hypothesized that, because of the additional mailing (the treatment), the response rates to the upcoming survey would increase. There was, however, no difference between the response rates of the treatment group that received the additional mailing and the control group. In the specific circumstances of the mailing experiment, sending project participants a postcard and stickers as a reminder of the upcoming survey and of their participation in the pilot project was not an efficient way to increase response rates.
    Release date: 2009-12-03
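
The null result reported here is the kind of comparison a two-proportion z-test settles. A minimal sketch with invented counts for the treatment (postcard and stickers) and control groups:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical group sizes and completed-survey counts.
n_treat, x_treat = 1500, 960     # received the extra mailing
n_ctrl,  x_ctrl  = 1500, 948     # no extra mailing

p_t, p_c = x_treat / n_treat, x_ctrl / n_ctrl
p_pool = (x_treat + x_ctrl) / (n_treat + n_ctrl)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
z = (p_t - p_c) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"treatment {p_t:.3f} vs control {p_c:.3f}: z={z:.2f}, p={p_value:.3f}")
# A p-value this large is consistent with the experiment's finding of
# no difference between the groups.
```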

  • Articles and reports: 11-522-X200800011001
    Description: The Québec Population Health Survey (EQSP), currently underway with collection wrapping up in February 2009, offers an opportunity, because of the size of its sample, to assess in a controlled environment the impact that sending introductory letters to respondents has on the response rate. Since this regional telephone survey is expected to have more than 38,000 respondents, it was possible to use part of its sample for this study without having too great an impact on its overall response rate. In random digit dialling (RDD) surveys such as the EQSP, one of the main challenges in sending out introductory letters is reaching the survey units. Doing so depends largely on our capacity to associate an address with the sample units and on the quality of that information.

    This article describes the controlled study proposed by the Institut de la statistique du Québec to measure the effect that sending out introductory letters to respondents had on the survey's response rate.
    Release date: 2009-12-03
Reference (1) (1 result)

  • Surveys and statistical programs – Documentation: 75-005-M2023001
    Description: This document provides information on the evolution of response rates for the Labour Force Survey (LFS) and a discussion of the evaluation of two aspects of data quality that ensure the LFS estimates continue providing an accurate portrait of the Canadian labour market.
    Release date: 2023-10-30