# Adaptive survey designs to minimize survey mode effects – a case study on the Dutch Labor Force Survey

## 2. The multi-mode optimization problem

In this section, we construct the multi-mode optimization problem that accounts for mode effects on a single key survey variable. Apart from the survey mode, we also consider caps on the number of calls in telephone and face-to-face as design features in the optimization. In the optimization model, we allow different design features to be assigned to different subpopulations. Hence, the optimization may lead to an adaptive survey design; it does so when the optimal allocation probabilities differ over the subpopulations. In our case, the subpopulations are built on linked administrative data. Note that they could also be built based on paradata collected during the early stages of the survey. The last component of the optimization problem is a set of explicit quality and cost functions. In our case, the quality functions are derived from mode differences in selection and measurement bias and from requirements on the precision of statistics. As a cost function, we use the total variable costs of the survey design. In the following paragraphs, we discuss the components of the optimization problem.

We begin with the survey design features contained in the survey strategy set $\mathcal{S}.$ We consider single-mode and sequential mixed-mode strategies, i.e., strategies in which nonrespondents to one mode are followed up in another mode. A single-mode strategy is labelled $M$ and a sequential mixed-mode strategy ${M}_{1}\to {M}_{2}.$ We consider Web, telephone and face-to-face as the survey modes of interest and abbreviate them to $Web,$ $Tel$ and $F2F.$ Examples of single-mode and sequential mixed-mode strategies are $Tel$ and $Web\to F2F,$ respectively. For interviewer modes, we additionally consider a cap $k$ on the number of calls, denoted as $Mk.$ For example, $F2F3$ denotes a single-mode survey strategy that uses face-to-face with a maximum of three visits. We let $Mk+$ denote the counterpart strategy without an explicit cap. We do not consider concurrent mixed-mode strategies (two or more modes offered simultaneously to sample units) in this paper. This restriction is without loss of generality: it would be straightforward to apply the methodology to any set of multi-mode strategies, including hybrid forms of sequential and concurrent mixed-mode strategies. A wide or diffuse set of strategies will, however, come at the cost of a larger number of input parameters that need to be estimated. The survey strategy set $\mathcal{S}$ explicitly includes the empty strategy, denoted by $\Phi ,$ which represents the case where a population unit is not sampled, i.e., no action is taken to obtain a response from the unit. We let ${\mathcal{S}}^{R}=\mathcal{S}\backslash \left\{\Phi \right\}$ denote the set of real, non-empty strategies.

Population units are clustered into $G$ groups, $\mathcal{G}=\left\{\mathrm{1,}\dots \mathrm{,}G\right\},$ given a set of characteristics $X,$ such as age and ethnicity, that can be extracted from external sources of data or from paradata. Let $p\left(s\mathrm{,}g\right)$ be the allocation probability of strategy $s$ to group $g,$ i.e., a proportion $p\left(s\mathrm{,}g\right)$ of subpopulation $g$ is sampled and approached through strategy $s.$ In general, multiple strategies may have non-zero allocation probabilities, so that the subpopulation is divided over multiple strategies. Define the allocation probability $p\left(\Phi \mathrm{,}g\right)$ as the probability that a unit from subpopulation $g$ is not included in the sample. The ratio $p\left(s\mathrm{,}g\right)/\left(1-p\left(\Phi \mathrm{,}g\right)\right)$ is then the probability that a unit is assigned strategy $s$ given that it has been sampled. For example, if only the allocation probabilities to the empty strategy $p\left(\Phi \mathrm{,}g\right)$ vary and the allocation probabilities $p\left(s\mathrm{,}g\right)\mathrm{,}$ $\forall s\in {\mathcal{S}}^{R}\mathrm{,}$ are equal conditional on being sampled, then the design is stratified but non-adaptive. The probabilities must satisfy

$$\begin{array}{lll}{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}p\left(s\mathrm{,}g\right)}+p\left(\Phi \mathrm{,}g\right)\hfill & =\hfill & \mathrm{1,}\text{\hspace{0.17em}}\forall g\in \mathcal{G}\mathrm{,}\hfill \\ 0\le p\left(s\mathrm{,}g\right)\hfill & \le \hfill & \mathrm{1,}\text{\hspace{0.17em}}\forall s\in \mathcal{S}\mathrm{,}\text{\hspace{0.17em}}g\in \mathcal{G}\mathrm{.}\hfill \end{array}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.1)$$
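As a concrete illustration, the feasibility conditions (2.1) are easy to verify for a candidate allocation. The Python sketch below uses invented strategy labels and probabilities, none of which come from the case study; `satisfies_2_1` is a hypothetical helper name.

```python
# Hypothetical sketch: representing allocation probabilities p(s, g) and
# checking the feasibility conditions in (2.1). All labels and numbers
# are illustrative, not taken from the Labor Force Survey case study.

# Non-empty strategies S^R plus the empty strategy Phi.
STRATEGIES = ["Web", "Tel3", "Web->F2F", "Phi"]
GROUPS = ["g1", "g2"]

# p[(s, g)]: probability that a unit in group g is allocated strategy s.
p = {
    ("Web", "g1"): 0.5, ("Tel3", "g1"): 0.2, ("Web->F2F", "g1"): 0.1, ("Phi", "g1"): 0.2,
    ("Web", "g2"): 0.3, ("Tel3", "g2"): 0.3, ("Web->F2F", "g2"): 0.2, ("Phi", "g2"): 0.2,
}

def satisfies_2_1(p, strategies, groups, tol=1e-9):
    """Check that all p(s, g) lie in [0, 1] and sum to 1 within each group."""
    for g in groups:
        if any(not 0.0 <= p[(s, g)] <= 1.0 for s in strategies):
            return False
        if abs(sum(p[(s, g)] for s in strategies) - 1.0) > tol:
            return False
    return True

print(satisfies_2_1(p, STRATEGIES, GROUPS))
```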

The allocation probabilities of survey strategies assigned to subpopulations $p\left(s\mathrm{,}g\right)$ define the decision variables in the optimization model. More generally, and analogous to sampling designs, one could allow for dependencies between population units being sampled and/or being allocated to non-empty strategies $s\in {\mathcal{S}}^{R}.$ We will not add that complexity here, but assume independence.

We now discuss the quality and cost functions. We assume that the interest lies in estimating the population means of a survey variable $y.$ Given that we consider the survey mode as one of the design features, we view the nonresponse adjusted bias on $y$ between the proposed design and a specified benchmark design $\text{BM}$ as the main quality function. This bias may be viewed as the adjusted method effect with respect to $\text{BM},$ and it is a mix of mode-specific measurement biases and remaining mode-specific nonresponse biases after adjustment. If both the proposed design and the benchmark design are single mode, then the bias is a true (adjusted) mode effect. If one of the designs is multi-mode, then the bias represents a complex mixture of mode effects, see for instance Klausch, Hox and Schouten (2014).

Let ${N}_{g}$ be the population size of group $g,$ ${w}_{g}={N}_{g}/N$ be the proportion of group $g$ in the population of size $N,$ and $\rho \left(s\mathrm{,}g\right)$ be the response propensity for group $g$ if strategy $s$ is assigned. For a specific group, we define the adjusted method effect as the nonresponse adjusted difference between the survey estimate ${\overline{y}}_{s,g}$ and a benchmark estimate ${\overline{y}}_{g}^{\text{BM}}$ of the population mean $\overline{Y},$ where the survey estimate ${\overline{y}}_{s\mathrm{,}g}$ is obtained by allocating strategy $s\in {\mathcal{S}}^{R}$ to subpopulation $g\in \mathcal{G}.$ Let $D\left(s\mathrm{,}g\right)$ denote this difference. The adjusted method effect is expressed as

$$D\left(s\mathrm{,}g\right)={\overline{y}}_{s\mathrm{,}g}-{\overline{y}}_{g}^{\text{BM}}\mathrm{,}\text{\hspace{0.17em}}\forall s\in {\mathcal{S}}^{R}\mathrm{,}\text{\hspace{0.17em}}g\in \mathcal{G}\mathrm{.}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.2)$$

For convenience, we omit the adjective “adjusted” in the following and refer to $D\left(s\mathrm{,}g\right)$ simply as the *method effect*.

In this paper, we seek to minimize the expected absolute overall method effect with respect to a given benchmark design $\text{BM},$ i.e., the absolute value of the weighted average over strata of the method effects $D\left(s\mathrm{,}g\right)$ relative to $\text{BM},$ where within each stratum the effects are averaged over strategies with weights proportional to response. The expected absolute overall method effect with respect to $\text{BM}$ is equal to

$${\overline{D}}^{\text{BM}}=\left|\text{\hspace{0.17em}}{\displaystyle \sum _{g\in \mathcal{G}}}{w}_{g}\frac{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)D\left(s\mathrm{,}g\right)}{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)}\text{\hspace{0.17em}}\right|.\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.3)$$

This objective function represents the expected shift in the time series of the key survey statistic when a redesign is implemented from the benchmark design to the adaptive design with allocation probabilities $p\left(s\mathrm{,}g\right).$ If a survey is new or if the benchmark design was never actually fielded, the objective function represents the bias of the adaptive survey design relative to the benchmark design. It is, therefore, a very useful objective function. Note that ${\overline{y}}_{s\mathrm{,}g}$ is a nonresponse adjusted estimate of $\overline{Y},$ while $\rho \left(s\mathrm{,}g\right)$ is an unweighted estimate of the group $g$ response probability under strategy $s.$ We implicitly assume that the nonresponse adjustment does not influence the contribution of each group and strategy to the overall response. This allows us to write the objective function as in (2.3); performing nonresponse adjustment within the optimization framework itself could lead to a very complex, perhaps even unsolvable, problem. We minimize the overall method effect ${\overline{D}}^{\text{BM}}$ by optimally assigning strategies $s\in {\mathcal{S}}^{R}$ to the groups $g\in \mathcal{G},$ i.e.,

$$\underset{p\left(s\mathrm{,}g\right)}{\text{minimize}}\text{\hspace{0.17em}}{\overline{D}}^{\text{BM}}.\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.4)$$
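To make (2.3) concrete, the sketch below evaluates the overall method effect for a toy instance with two groups and two non-empty strategies; every weight, propensity and method effect is an invented number, not an estimate from the Labor Force Survey.

```python
# Toy evaluation of the overall method effect (2.3). All numbers invented.
w = [0.6, 0.4]                       # w_g: group shares, summing to 1
p = [[0.5, 0.3], [0.3, 0.5]]         # p[s][g]: allocation probabilities
rho = [[0.4, 0.2], [0.6, 0.5]]       # rho[s][g]: response propensities
D = [[0.02, -0.01], [-0.03, 0.04]]   # D[s][g]: method effects vs. benchmark

def overall_method_effect(w, p, rho, D):
    """|sum_g w_g * (response-weighted average of D(s, g) over s)|, as in (2.3)."""
    total = 0.0
    for g, w_g in enumerate(w):
        num = sum(p[s][g] * rho[s][g] * D[s][g] for s in range(len(p)))
        den = sum(p[s][g] * rho[s][g] for s in range(len(p)))
        total += w_g * num / den
    return abs(total)

print(overall_method_effect(w, p, rho, D))
```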

Ideally, ${\overline{D}}^{\text{BM}}=0.$ Achieving this, however, may be impractical, for example because it would require unlimited resources. Therefore, practical limitations such as scarce resources are reflected through a number of constraints in our model. A limited budget $B$ is available to set up and run the survey. Let $c\left(s\mathrm{,}g\right)$ be the unit cost of applying strategy $s$ to one unit in group $g.$ The cost constraint is formulated as follows

$$\sum _{s\mathrm{,}g}{N}_{g}p\left(s\mathrm{,}g\right)c\left(s\mathrm{,}g\right)\le B\mathrm{.}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.5)$$

To ensure a minimal precision for the survey estimate of $\overline{Y},$ a minimum number ${R}_{g}$ of respondents per group is required. This translates to the following constraint

$$\sum _{s\in {\mathcal{S}}^{R}}{N}_{g}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)\ge {R}_{g}\mathrm{,}\text{\hspace{0.17em}}\forall g\in \mathcal{G}\mathrm{.}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.6)$$
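Constraints (2.5) and (2.6) are linear in the allocation probabilities, so checking a candidate design amounts to two weighted sums. A sketch with invented group sizes, costs, propensities, budget and respondent targets:

```python
# Checking the budget constraint (2.5) and the per-group respondent
# constraint (2.6) for a candidate design. All numbers are invented.
N = [6000.0, 4000.0]                  # N_g: group population sizes
p = [[0.05, 0.03], [0.02, 0.04]]      # p[s][g]: allocation probabilities
rho = [[0.4, 0.2], [0.6, 0.5]]        # rho[s][g]: response propensities
c = [[5.0, 5.0], [30.0, 30.0]]        # c[s][g]: unit cost per approached unit
B = 12000.0                           # total budget
R = [100.0, 50.0]                     # R_g: minimum respondents per group

# Left-hand side of (2.5): expected total variable costs.
total_cost = sum(N[g] * p[s][g] * c[s][g]
                 for s in range(len(p)) for g in range(len(N)))
# Left-hand side of (2.6): expected respondents per group.
respondents = [sum(N[g] * p[s][g] * rho[s][g] for s in range(len(p)))
               for g in range(len(N))]

print(total_cost <= B, all(r >= R[g] for g, r in enumerate(respondents)))
```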

In addition to the objective function, the method effect between the proposed design and the benchmark design also enters the optimization problem through a constraint on the comparability of population subgroups. The overall method effect as an objective function alone could lead to an unbalanced solution. For example, suppose a group $g$ is assigned a strategy $s$ such that the corresponding $D\left(s\mathrm{,}g\right)$ is a large negative value, while the other groups $h\in \mathcal{G}\backslash \left\{g\right\}$ receive strategies that yield positive $D\left(s\mathrm{,}h\right)$ values. In the weighted average, the large negative $D\left(s\mathrm{,}g\right)$ is canceled out, but group $g$ will behave very differently from the other groups, which complicates comparisons among groups. To prevent such designs, we limit the absolute difference in the method effect between any two groups by the following constraint

$$\underset{g\mathrm{,}h\in \mathcal{G}}{\mathrm{max}}\left\{\frac{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)D\left(s\mathrm{,}g\right)}{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)}-\frac{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}h\right)\rho \left(s\mathrm{,}h\right)D\left(s\mathrm{,}h\right)}{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}h\right)\rho \left(s\mathrm{,}h\right)}\right\}\le M\mathrm{.}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.7)$$
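Since the left-hand side of (2.7) depends only on the per-group response-weighted method effects, the constraint can be checked by computing those $G$ quantities once and comparing all ordered pairs. A sketch with invented inputs and an illustrative bound $M$:

```python
import itertools

# Response-weighted method effect per group (the bracketed terms in (2.7)),
# followed by the pairwise checks. All numbers are invented.
p = [[0.5, 0.3], [0.3, 0.5]]          # p[s][g]: allocation probabilities
rho = [[0.4, 0.2], [0.6, 0.5]]        # rho[s][g]: response propensities
D = [[0.02, -0.01], [-0.03, 0.04]]    # D[s][g]: method effects vs. benchmark
M = 0.05                              # comparability bound in (2.7)

group_effect = [
    sum(p[s][g] * rho[s][g] * D[s][g] for s in range(len(p)))
    / sum(p[s][g] * rho[s][g] for s in range(len(p)))
    for g in range(2)
]

# Checking the difference for every ordered pair (g, h) bounds the maximum
# over pairs, so it enforces (2.7).
ok = all(group_effect[g] - group_effect[h] <= M
         for g, h in itertools.permutations(range(len(group_effect)), 2))
print(ok)
```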

However, when

$$\frac{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)D\left(s\mathrm{,}g\right)}{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}g\right)\rho \left(s\mathrm{,}g\right)}-\frac{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}h\right)\rho \left(s\mathrm{,}h\right)D\left(s\mathrm{,}h\right)}{{\displaystyle \sum _{s\in {\mathcal{S}}^{R}}}p\left(s\mathrm{,}h\right)\rho \left(s\mathrm{,}h\right)}\le M\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.8)$$

is included in the optimization problem for each ordered pair $\left(g\mathrm{,}h\right)\in \mathcal{G}\times \mathcal{G}\mathrm{,}$ then (2.7) is automatically satisfied. For practical reasons, i.e., to avoid depleting the sampling frame, we also introduce a constraint on the maximum sample size ${S}_{\text{max}},$ i.e.,

$$\sum _{s\in {\mathcal{S}}^{R}\mathrm{,}\text{\hspace{0.17em}}g\in \mathcal{G}}{N}_{g}p\left(s\mathrm{,}g\right)\le {S}_{\text{max}}\mathrm{.}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.9)$$

Additionally, for each group we require that at least one allocation probability $p\left(s\mathrm{,}g\right)\mathrm{,}$ $s\in {\mathcal{S}}^{R}\mathrm{,}$ be strictly positive,

$$\sum _{s\in {\mathcal{S}}^{R}}p\left(s\mathrm{,}g\right)\text{\hspace{0.17em}}>\text{\hspace{0.17em}}\mathrm{0,}\text{\hspace{0.17em}}\forall g\in \mathcal{G}\mathrm{,}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}\text{\hspace{1em}}(2.10)$$

to avoid computational errors such as division by zero in (2.8).

Objective function (2.4) together with constraints (2.1) and (2.5)–(2.10) forms the multi-mode optimization problem for minimizing method effects against a benchmark through adaptive survey designs. The resulting problem is a nonconvex, nonlinear optimization problem.
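To make the full formulation tangible, the sketch below solves a toy one-group instance with two non-empty strategies ($Web$ and $F2F3$) by coarse grid search over the allocation probabilities, enforcing (2.1), (2.5), (2.6) and (2.10); with a single group the comparability constraints are vacuous, and (2.9) is omitted for brevity. The grid search merely stands in for a proper nonconvex NLP solver, and every input number is invented.

```python
import itertools

# Invented inputs for a single group of size N.
rho = {"Web": 0.4, "F2F3": 0.6}        # response propensities
D = {"Web": 0.03, "F2F3": -0.02}       # method effects vs. the benchmark
c = {"Web": 5.0, "F2F3": 30.0}         # unit costs per approached unit
N, B, R = 5000.0, 25000.0, 400.0       # population size, budget, min respondents

best, best_p = None, None
grid = [i / 20 for i in range(21)]     # candidate probabilities 0.00, 0.05, ..., 1.00
for p_web, p_f2f in itertools.product(grid, grid):
    if p_web + p_f2f > 1.0 or p_web + p_f2f == 0.0:     # (2.1) and (2.10)
        continue
    if N * (p_web * c["Web"] + p_f2f * c["F2F3"]) > B:  # budget constraint (2.5)
        continue
    den = p_web * rho["Web"] + p_f2f * rho["F2F3"]
    if N * den < R:                                     # respondent constraint (2.6)
        continue
    d_bar = abs((p_web * rho["Web"] * D["Web"]
                 + p_f2f * rho["F2F3"] * D["F2F3"]) / den)  # objective (2.3)
    if best is None or d_bar < best:
        best, best_p = d_bar, (p_web, p_f2f)

print(best, best_p)
```

In this toy instance the two modes have method effects of opposite sign, so the minimizer balances their response-weighted contributions, illustrating how the optimization trades mode effects off against costs and precision.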
