# Multiple imputation of missing values in household data with structural zeros Section 4. Strategies for speeding up the MCMC sampler

The rejection sampling step in the Gibbs sampler in Section 2.2 can be inefficient when $\mathcal{S}$ is large (Manrique-Vallier and Reiter, 2014; Hu et al., 2018), as the sampler tends to generate many impossible households before getting enough feasible ones. In addition, it takes computing time to check whether or not each sampled household satisfies all the structural zero rules. These computational costs are compounded when the sampler also incorporates missing values. In this section, we present two strategies that can reduce the number of impossible households that the algorithm generates, thereby speeding up the sampler. The Appendix includes simulation studies showing that both strategies can speed up the MCMC significantly.

## 4.1  Moving the household head to the household level

Many datasets include a variable recording the relationship of each individual to the household head. There can be only one household head in any household. This restriction can account for a large proportion of the combinations in $\mathcal{S}$. As a simple working example, consider a dataset that contains $n = 1{,}000$ households of size two, resulting in a total of $N = 2{,}000$ individuals. Suppose the data contain no household-level variables and two individual-level variables, age and relationship to household head. Also, suppose age has 100 levels while relationship to household head has 13 levels, which include household head, spouse of the household head, etc. Then, $\mathcal{C}$ contains $13^2 \times 100^2 = 1.69 \times 10^6$ combinations. Suppose the rule, “each household must contain exactly one head,” is the only structural zero rule defined on the dataset. Then, $\mathcal{S}$ contains $1.45 \times 10^6$ impossible combinations, approximately 86% of the size of $\mathcal{C}$. If, for example, the model assigns uniform probability to all combinations in $\mathcal{C}$, we would expect to sample about $(0.86/0.14) \times 1{,}000 \approx 6{,}143$ impossible households at every iteration to augment the $n$ feasible households.
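To make the rejection cost concrete, the following sketch simulates the working example under the uniform model and counts how many impossible households are drawn before 1,000 feasible ones are obtained. The coding of level 0 as “household head” is an illustrative assumption, not from the paper; age is omitted because it does not affect feasibility here.

```python
import random

# Working example of Section 4.1: households of size two, where the
# relationship-to-head variable has 13 levels. A household is feasible
# iff exactly one of its two members is the head (level 0, hypothetical code).
random.seed(42)

N_FEASIBLE_TARGET = 1_000  # n, the number of feasible households needed
HEAD = 0                   # illustrative code for the "household head" level

feasible, impossible = 0, 0
while feasible < N_FEASIBLE_TARGET:
    # Sample the two relationship values uniformly over the 13 levels,
    # matching the uniform-probability model assumed in the text.
    rel = (random.randrange(13), random.randrange(13))
    if rel.count(HEAD) == 1:
        feasible += 1
    else:
        impossible += 1

# The expected ratio of impossible to feasible draws is 145/24, so roughly
# six thousand impossible households are generated per iteration.
print(impossible)
```

The exact expected count is $(145/24) \times 1{,}000 \approx 6{,}042$; the text's figure of 6,143 comes from the rounded ratio 0.86/0.14.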

Instead, we treat the variables for the household head as household-level characteristics. This eliminates structural zero rules defined on the household head alone. In the working example, moving the household head to the household level results in one new household-level variable, age of household head, which has 100 levels. The relationship to household head variable can be ignored for household heads. For the other individuals in the household, the relationship to household head variable now has 12 levels, with the level corresponding to “household head” removed. Thus, $\mathcal{C}$ contains $12 \times 100^2 = 1.20 \times 10^5$ combinations, and $\mathcal{S}$ contains zero impossible combinations. We would not even need to sample impossible households in the Gibbs sampler in Section 2.2.
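The combination-count bookkeeping before and after the reformatting can be verified with a short script; the variable names are illustrative.

```python
# Combination counts for the working example: 13 relationship levels,
# 100 age levels, households of size two.
REL_LEVELS, AGE_LEVELS = 13, 100

# Before reformatting: both members carry relationship and age.
c_before = REL_LEVELS**2 * AGE_LEVELS**2  # all combinations in C
# Feasible relationship pairs have exactly one head:
# 2 positions for the head times 12 non-head levels for the other member.
feasible_rel_pairs = 2 * (REL_LEVELS - 1)
s_before = (REL_LEVELS**2 - feasible_rel_pairs) * AGE_LEVELS**2  # size of S

# After moving the head to the household level: age of head becomes a
# household-level variable (100 levels), and the remaining member has
# 12 relationship levels ("household head" removed) and 100 age levels.
c_after = (REL_LEVELS - 1) * AGE_LEVELS**2
s_after = 0  # the one-head rule can no longer be violated

print(c_before, s_before, c_after, s_after)
```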

In general, this strategy can reduce the size of $\mathcal{S}$ significantly, albeit usually not to zero as in the simple example here since $\mathcal{S}$ usually contains combinations resulting from other types of structural zero rules. This strategy is not a replacement for the rejection sampler in Section 2.2; rather, it is a data reformatting technique that can be combined with the sampler.

## 4.2  Setting an upper bound on the number of impossible households to sample

To reduce computation time, we can put an upper bound on the number of sampled cases in $\mathcal{X}^0$. One way to achieve this is to replace $n_{1h}$ in step S1(f) of Section 2.2 with $\lceil n_{1h} \psi_h \rceil$, for some $\psi_h$ such that $1/\psi_h$ is a positive integer, so that we sample only approximately $\lceil n_{0h} \psi_h \rceil$ impossible households for each $h \in \mathcal{H}$. However, doing so underestimates the actual probability mass assigned to $\mathcal{S}$ by the model. We can illustrate this using the simple example of Section 4.1. Suppose the model assigns uniform probability to all combinations in $\mathcal{C}$ as before. We set $\psi_2 = 0.5$, so that we sample approximately $3{,}072 = \lceil 6{,}143 \times 0.5 \rceil$ impossible households in every iteration of the MCMC sampler. The probability of generating one impossible household is $3{,}072/(1{,}000 + 3{,}072) = 0.75$, a decrease from the actual value of 0.86. Therefore, we would underestimate the true contribution of $\{\mathcal{X}^0, G^0, M^0\}$ to the likelihood.
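The arithmetic of the cap can be reproduced directly:

```python
import math

# Effect of the cap in the working example of Section 4.1.
n_feasible = 1_000
expected_impossible = 6_143  # expected impossible draws per iteration (from the text)
psi = 0.5                    # cap factor psi_h for households of size h = 2

capped = math.ceil(expected_impossible * psi)         # capped number of impossible draws
p_impossible_capped = capped / (n_feasible + capped)  # implied probability of an impossible draw
p_impossible_actual = 0.86                            # mass the model actually places on S

print(capped, round(p_impossible_capped, 2))
```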

We refer to this strategy as the cap-and-weight approach. To use it, we need to apply a correction that re-weights the contribution of $\{\mathcal{X}^0, G^0, M^0\}$ to the full joint likelihood. We do so using ideas akin to those used by Chambers and Skinner (2003) and Savitsky and Toth (2016), approximating the likelihood of the full unobserved data with a “pseudo” likelihood using weights (the $1/\psi_h$'s). The impossible households contribute to the full joint likelihood only through the discrete distributions in (2.3) to (2.6). The sufficient statistics for estimating the parameters of these distributions are the observed counts for the corresponding variables in the set $\{\mathcal{X}^1, G^1, M^1, \mathcal{X}^0, G^0, M^0\}$, within each latent class for the household-level variables and within each latent class pair for the individual-level variables. Thus, for each $h \in \mathcal{H}$, we can re-weight the contribution of impossible households by multiplying the observed counts for households of size $h$ in $\{\mathcal{X}^0, G^0, M^0\}$ by $1/\psi_h$ for the corresponding variable and latent classes. This raises the likelihood contribution of impossible households of size $h$ to the power of $1/\psi_h$. Mathematically, $1/\psi_h$ need not be a positive integer; we require that only so that its product with the observed counts is an integer. We modify the Gibbs sampler to incorporate the cap-and-weight approach by replacing steps S1, S3, S4, S5 and S6; see the Appendix for the modified steps.
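The count re-weighting can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the count containers and their keys (latent class, variable level) are invented for the example, and the $\psi_h$ values are arbitrary.

```python
from collections import Counter

# Hypothetical sketch of the re-weighting: the sufficient statistics are
# counts of variable levels within latent classes, so raising the likelihood
# contribution of impossible size-h households to the power 1/psi_h amounts
# to adding each of their counts 1/psi_h times.
psi = {2: 0.5, 3: 0.25}  # example psi_h values; each 1/psi_h is a positive integer

# Counts from feasible households, keyed by (latent class, variable level).
counts_feasible = Counter({("classA", "level1"): 40, ("classA", "level2"): 10})
# Counts from the capped sample of impossible households, grouped by size h.
counts_impossible = {2: Counter({("classA", "level1"): 5}),
                     3: Counter({("classA", "level2"): 3})}

weighted = Counter(counts_feasible)
for h, counts in counts_impossible.items():
    w = round(1 / psi[h])  # integer weight, by the 1/psi_h requirement
    for key, c in counts.items():
        weighted[key] += w * c  # re-weighted contribution of impossible cases

print(weighted)
```

The integer requirement on $1/\psi_h$ is what keeps the `weighted` counts whole numbers here.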

Setting each $\psi_h = 1$ corresponds to the original rejection sampler, so the two approaches should provide very similar results when the $\psi_h$ are near 1. In our experience, results of the cap-and-weight approach become significantly less accurate than those of the regular rejection sampler when $\psi_h < 1/4$. The time gained using this speedup approach in comparison to the regular sampler depends on the features of the data and the specified values of the weights $\{\psi_h : h \in \mathcal{H}\}$. To select the $\psi_h$'s, we suggest trying different values, starting with values close to one, in initial runs of the MCMC sampler on a small random sample of the data. Analysts should examine the convergence and mixing behavior of the chains in comparison to the chain with all the $\psi_h$'s set to one, and select values that offer reasonable speedup while preserving convergence and mixing. This can be done quickly by comparing trace plots of a random set of parameters from the model that are not subject to label switching, such as $\alpha$ and $\beta$, or by examining marginal, bivariate and trivariate probabilities estimated from synthetic data generated from the MCMC.
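One quick numerical companion to the trace-plot comparison is the lag-1 autocorrelation of a parameter trace: slower-mixing chains show higher autocorrelation. The sketch below is illustrative only; the AR(1) series are simulated stand-ins for traces of a parameter such as $\alpha$, not output from the actual sampler.

```python
import random
import statistics

def lag1_autocorr(trace):
    """Sample lag-1 autocorrelation of a scalar parameter trace."""
    n = len(trace)
    mean = statistics.fmean(trace)
    num = sum((trace[t] - mean) * (trace[t + 1] - mean) for t in range(n - 1))
    den = sum((x - mean) ** 2 for x in trace)
    return num / den

def ar1(phi, n=5_000, seed=7):
    """Simulated stand-in for an MCMC trace with persistence phi."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0, 1)
        out.append(x)
    return out

well_mixing = ar1(0.2)    # proxy for the chain with all psi_h = 1
poorly_mixing = ar1(0.9)  # proxy for a chain with an over-aggressive cap

print(round(lag1_autocorr(well_mixing), 2), round(lag1_autocorr(poorly_mixing), 2))
```

A capped chain whose autocorrelations stay close to those of the $\psi_h = 1$ chain is a reasonable candidate.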

