Adaptive survey designs to minimize survey mode effects – a case study on the Dutch Labor Force Survey

1. Introduction

In this paper, we propose and demonstrate the minimization of mode effects through adaptive survey designs when a survey has a single statistic or indicator. We demonstrate this method for the Dutch Labour Force Survey (LFS), which has the unemployment rate as the key indicator.

The emergence of the web as a survey mode has led to a renewed discussion about mixed-mode surveys. Market research companies quickly incorporated the web into their designs; official statistics institutes have been slower, but they too are now considering mixed-mode designs with the web as one of the modes. Reasons for studying mixed-mode designs include the increasing cost of face-to-face surveys, decreasing coverage in telephone surveys and low participation in web surveys (Fan and Yan 2010). As a consequence, survey organizations are gradually restructuring their single-mode designs into mixed-mode designs. A large-scale project, Data Collection for the Social Surveys (DCSS), was initiated within the EU statistical system in 2012 to investigate mixed-mode survey designs for the LFS; see Blanke and Luiten (2012).

It is well known that the survey mode affects both non-observation survey errors (item nonresponse, unit nonresponse and undercoverage) and observation survey errors (measurement error and processing error). The overall difference between two modes is usually referred to as the mode effect. The difference between the measurement errors of two modes is termed the pure mode effect or measurement effect, while the difference in undercoverage and nonresponse is termed the selection effect; see, for example, de Leeuw (2005), Dillman, Phelps, Tortora, Swift, Kohrell, Berck and Messer (2009), Vannieuwenhuyze (2013) and Klausch, Hox and Schouten (2013b) for extensive discussions. There is evidence (Jäckle, Roberts and Lynn 2010, Schouten, van den Brakel, Buelens, van der Laan, Burger and Klausch 2013b, Dillman et al. 2009) that mode effects can be large. They may lead to statistics that are incomparable over time or over population subgroups. Assessing, minimizing and stabilizing the impact of mode effects on survey estimates has therefore become an important goal.
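In schematic form (notation introduced here for illustration only): for two modes $m_1$ and $m_2$,

\[
\underbrace{\bar{y}^{\,\mathrm{obs}}_{m_1} - \bar{y}^{\,\mathrm{obs}}_{m_2}}_{\text{mode effect}}
\;=\;
\underbrace{\bigl(\bar{y}^{\,\mathrm{obs}}_{m_1} - \bar{y}^{\,\mathrm{true}}_{m_1}\bigr) - \bigl(\bar{y}^{\,\mathrm{obs}}_{m_2} - \bar{y}^{\,\mathrm{true}}_{m_2}\bigr)}_{\text{measurement effect}}
\;+\;
\underbrace{\bar{y}^{\,\mathrm{true}}_{m_1} - \bar{y}^{\,\mathrm{true}}_{m_2}}_{\text{selection effect}},
\]

where $\bar{y}^{\,\mathrm{obs}}_{m}$ is the mean of the answers recorded from the respondents reached under mode $m$ and $\bar{y}^{\,\mathrm{true}}_{m}$ is the mean of their true values; the selection effect thus reflects that different modes reach different sets of respondents.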

There are four options to reduce the impact of mode effects in survey design and survey estimation: thorough questionnaire design and data collection design aim to prevent them, while survey estimation and calibration help to account for them by weighting. Careful questionnaire design reduces measurement differences between modes. This is possible by using a unified mode design for questionnaires, see Dillman et al. (2009), or by achieving an equivalent stimulus per mode, see de Leeuw (2005). Some measurement effects are, however, intrinsic to the mode of administration; effects arising from, for example, oral versus visual presentation or the pace of the interview are hard or impossible to remove completely. Furthermore, questionnaire design cannot remove selection effects, although the length, layout and content of a questionnaire may be a common cause of both measurement and selection effects. The history of the questions may also prevent a questionnaire from being redesigned completely per mode, as survey users or stakeholders may not want to reduce the length of a questionnaire or change the wording of survey items. In summary, some mode effects will always remain, even after a thorough questionnaire redesign. If estimates of measurement effects and selection effects are available, they can be used to design the data collection strategy of a survey, i.e., to avoid mode effects, or to design the estimation strategy, i.e., to adjust for them in future surveys.

The design option implies that some modes or sequences of modes are not applied because they are expected to lead to large mode effects with respect to some benchmark design, i.e., a survey design that is considered to be free of mode effects. The expectation of large mode effects is ideally based on pilot studies but may also lean on experience. When the choice of mode(s) is not uniform over the whole sample but is based on characteristics of persons or households, the survey design option amounts to an adaptive survey design, see Wagner (2008) and Schouten, Calinescu and Luiten (2013a). Such characteristics may be available before data collection starts or may become available during data collection in the form of paradata (i.e., data collection process data, see Kreuter 2013), leading to static and dynamic adaptive survey designs, respectively. The avoidance of mode effects by adaptive survey designs is the focus of this paper.
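To fix ideas, the following minimal sketch illustrates a static adaptive design: each sample unit is assigned a mode strategy using only characteristics known before data collection. The strata, strategies and assignment rules are hypothetical and serve purely as illustration; they are not the allocation derived later in this paper.

```python
# Minimal sketch of a *static* adaptive survey design: mode strategies
# are assigned from characteristics known before data collection.
# Strata, strategies and rules are hypothetical illustrations only.
from dataclasses import dataclass

# Candidate strategies: single modes or sequences with follow-up.
STRATEGIES = {
    "web_only": ("web",),
    "web_then_phone": ("web", "phone"),
    "web_then_f2f": ("web", "face-to-face"),
    "f2f_only": ("face-to-face",),
}

@dataclass
class SampleUnit:
    age: int
    has_listed_phone: bool  # e.g., linked from a frame or registry

def assign_strategy(unit: SampleUnit) -> tuple:
    """Return a mode sequence for this unit (hypothetical rules)."""
    if unit.age >= 65:
        # Suppose interviewer modes work better for older persons.
        return STRATEGIES["f2f_only"]
    if unit.has_listed_phone:
        return STRATEGIES["web_then_phone"]
    return STRATEGIES["web_then_f2f"]

for unit in [SampleUnit(34, True), SampleUnit(71, False), SampleUnit(52, False)]:
    print(unit, "->", assign_strategy(unit))
```

A dynamic variant would replace or extend such rules with paradata collected during fieldwork, e.g., switching a unit to face-to-face after repeated non-contacts in the phone mode.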

The adjustment option is especially interesting when there is a strong rationale or incentive to approximate the true values of a statistic, i.e., when the focus is not just on comparability but also on accuracy of statistics. A drawback of the adjustment option is that it is more costly than the design option, since precise estimates of mode effects are needed to ensure that the accuracy of the resulting statistics is not affected. A benefit of the adjustment option is that it is more flexible: it allows for different adjustments for different survey variables, whereas the design option has to make an overall choice. We refer to Vannieuwenhuyze (2013), Klausch, Hox and Schouten (2013a) and Suzer-Gurtekin (2013) for a discussion of adjustment during estimation.

Another option is to stabilize mode effects, a useful last-resort approach. Given that mode effects are conjectured to be present after questionnaire, data collection and estimation design, they can be stabilized over time by calibrating the distribution of modes in the response to some fixed distribution of modes. If the share of a mode in the response in a given month deviates from its fixed calibration level, the respondents to that mode receive a larger (or smaller) weight and respondents to the other modes a correspondingly smaller (or larger) weight. For a discussion of this method, see Buelens and van den Brakel (2014).
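The following minimal sketch, with purely illustrative mode shares, shows the basic mechanics of such a calibration; in practice the step is embedded in the survey's general calibration weighting.

```python
# Minimal sketch of stabilizing mode effects by calibrating the
# realized mode distribution of the response to a fixed target
# distribution (cf. Buelens and van den Brakel 2014).
# All modes and shares below are illustrative.
from collections import Counter

def mode_calibration_factors(response_modes, target_shares):
    """Return a per-mode weight factor: target share / realized share.

    Respondents in an over-represented mode are weighted down and
    those in an under-represented mode are weighted up, so that the
    weighted mode composition matches the fixed target every month.
    """
    n = len(response_modes)
    realized = {m: c / n for m, c in Counter(response_modes).items()}
    return {m: target_shares[m] / realized[m] for m in realized}

# One month's respondents (illustrative) and the fixed target mix.
modes = ["web"] * 55 + ["phone"] * 25 + ["face-to-face"] * 20
target = {"web": 0.50, "phone": 0.30, "face-to-face": 0.20}
print(mode_calibration_factors(modes, target))
# web gets factor 0.50/0.55 < 1, phone 0.30/0.25 > 1, face-to-face 1.0
```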

In this paper, we minimize the adjusted method effect relative to a benchmark mode design by stratifying the population into relevant subgroups and assigning the different subgroups to different modes or sequences of modes. The adjusted method effect of a design is the difference between the nonresponse-adjusted mean of that design and the nonresponse-adjusted mean of the benchmark design. The adjustment follows standard procedures, i.e., calibration of the response to a population distribution. Hence, the adjusted method effect is the compound of the measurement effect between the two designs and the residual selection effect between the two designs that is not removed by the nonresponse adjustment.
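In symbols (notation introduced here for illustration only; the formal optimization problem follows in Section 2): for a candidate design $d$ and the benchmark design $b$,

\[
\mathrm{AME}(d) \;=\; \hat{\bar{y}}^{\,\mathrm{adj}}_{d} \;-\; \hat{\bar{y}}^{\,\mathrm{adj}}_{b},
\]

where $\hat{\bar{y}}^{\,\mathrm{adj}}_{d}$ denotes the estimated mean under design $d$ after calibration of its response to the population distribution. The adaptive design then seeks a stratum-to-mode allocation for which $|\mathrm{AME}(d)|$ is small.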

Adaptive survey designs and the closely related responsive survey designs (Heeringa and Groves 2006, Kreuter 2013) are traditionally applied to reduce nonresponse error. As far as we know, to date only Calinescu and Schouten (2013a) have attempted to focus adaptive survey designs on measurement error or on the combination of nonresponse and measurement error. The main reasons are, first, that adaptive and responsive survey designs are still in their infancy and not widely applied and, second, that measurement error and measurement effects are inherently hard to measure. Many applications of adaptive survey designs involve a single survey mode, in which case it is plausible that measurement error is relatively stable across design choices. When the survey mode is itself one of the design features, this assumption is no longer plausible. The survey mode is, however, the most interesting design feature in adaptive survey designs because of its large quality-cost differential.

A complication that arises when including measurement error in adaptive survey designs is that, unlike nonresponse error, it is not the result of a simple yes-no decision: a sample unit either responds or does not, whereas measurement error also has a magnitude. This magnitude may vary per item in the survey questionnaire, which implies that with multiple survey items or variables the choice of modes is a multidimensional decision. Calinescu and Schouten (2013a) reduce this multidimensionality by using response styles (or response latencies). When a survey has only one or a few key variables, as is in fact the case for the LFS, this complication does not exist and the focus can be directly on the main variables. This is the path that we follow in the current paper.

This paper, therefore, brings two novel elements: we incorporate method effects due to modes into adaptive survey designs, and we focus on a single key variable. In our demonstration for the Dutch LFS, we consider three survey modes, namely web, phone and face-to-face, and various sequences of these modes. In recent years, the Dutch LFS underwent a series of design changes in its transition from a full face-to-face survey to a mixed-mode survey. Extensive knowledge and historical survey data on the interaction between survey design features, the survey mode in particular, and the response process are available. We use these data to estimate the various parameters that are needed for the optimization model.

The outline of the paper is as follows. In Section 2, we formulate the multi-mode optimization problem. In Section 3, we describe an algorithm for the optimization of the mode effect problem. We present the optimization results in Section 4. In Section 5, we discuss the results of the paper. Appendices A and B provide extensions to the numerical results of Section 4.
