Analysis
Results
All (13)
- Articles and reports: 12-001-X202300200017
Description:
Jean-Claude Deville, who passed away in October 2021, was one of the most influential researchers in the field of survey statistics over the past 40 years. This article traces some of his contributions that have had a profound impact on both survey theory and practice, covering balanced sampling using the cube method, calibration, the weight-sharing method, the development of variance expressions for complex estimators using the influence function, and quota sampling.
Release date: 2024-01-03
- Articles and reports: 12-001-X202200100006
Description:
In the last two decades, survey response rates have been steadily falling. In that context, it has become increasingly important for statistical agencies to develop and use methods that reduce the adverse effects of non-response on the accuracy of survey estimates. Follow-up of non-respondents may be an effective, albeit time and resource-intensive, remedy for non-response bias. We conducted a simulation study using real business survey data to shed some light on several questions about non-response follow-up. For instance, assuming a fixed non-response follow-up budget, what is the best way to select non-responding units to be followed up? How much effort should be dedicated to repeatedly following up non-respondents until a response is received? Should they all be followed up or a sample of them? If a sample is followed up, how should it be selected? We compared Monte Carlo relative biases and relative root mean square errors under different follow-up sampling designs, sample sizes and non-response scenarios. We also determined an expression for the minimum follow-up sample size required to expend the budget, on average, and showed that it maximizes the expected response rate. A main conclusion of our simulation experiment is that this sample size also appears to approximately minimize the bias and mean square error of the estimates.
Release date: 2022-06-21
- Articles and reports: 12-001-X202100100009
Description:
Predictive mean matching is a commonly used imputation procedure for addressing the problem of item nonresponse in surveys. The customary approach relies upon the specification of a single outcome regression model. In this note, we propose a novel predictive mean matching procedure that allows the user to specify multiple outcome regression models. The resulting estimator is multiply robust in the sense that it remains consistent if one of the specified outcome regression models is correctly specified. The results from a simulation study suggest that the proposed method performs well in terms of bias and efficiency.
Release date: 2021-06-24 (an illustrative predictive mean matching sketch appears after this results list)
- Articles and reports: 12-001-X201600214662
Description:
Two-phase sampling designs are often used in surveys when the sampling frame contains little or no auxiliary information. In this note, we shed some light on the concept of invariance, which is often mentioned in the context of two-phase sampling designs. We define two types of invariant two-phase designs: strongly invariant and weakly invariant two-phase designs. Some examples are given. Finally, we describe the implications of strong and weak invariance from an inference point of view.
Release date: 2016-12-20
- Articles and reports: 12-001-X201500114199 (Archived): A method of determining the winsorization threshold, with an application to domain estimation
Description:
In business surveys, it is not unusual to collect economic variables for which the distribution is highly skewed. In this context, winsorization is often used to treat the problem of influential values. This technique requires the determination of a constant that corresponds to the threshold above which large values are reduced. In this paper, we consider a method of determining the constant which involves minimizing the largest estimated conditional bias in the sample. In the context of domain estimation, we also propose a method of ensuring consistency between the domain-level winsorized estimates and the population-level winsorized estimate. The results of two simulation studies suggest that the proposed methods lead to winsorized estimators that have good bias and relative efficiency properties.
Release date: 2015-06-29 (an illustrative winsorization sketch appears after this results list)
- Articles and reports: 12-001-X201000211385
Description:
In this short note, we show that simple random sampling without replacement and Bernoulli sampling have approximately the same entropy when the population size is large. An empirical example is given as an illustration.
Release date: 2010-12-21 (a numerical entropy comparison appears after this results list)
- Articles and reports: 12-001-X201000111246
Description:
Many surveys employ weight adjustment procedures to reduce nonresponse bias. These adjustments make use of available auxiliary data. This paper addresses the issue of jackknife variance estimation for estimators that have been adjusted for nonresponse. Using the reverse approach for variance estimation proposed by Fay (1991) and Shao and Steel (1999), we study the effect of not re-calculating the nonresponse weight adjustment within each jackknife replicate. We show that the resulting 'shortcut' jackknife variance estimator tends to overestimate the true variance of point estimators in the case of several weight adjustment procedures used in practice. These theoretical results are confirmed through a simulation study where we compare the shortcut jackknife variance estimator with the full jackknife variance estimator obtained by re-calculating the nonresponse weight adjustment within each jackknife replicate.
Release date: 2010-06-29 (an illustrative shortcut-versus-full jackknife sketch appears after this results list)
- Articles and reports: 11-536-X200900110812
Description:
Variance estimation in the presence of imputed data has been widely studied in the literature. It is well known that treating the imputed values as if they were observed could lead to serious underestimation of the variance of the imputed estimator. Several approaches have been developed in recent years. In particular, Rao and Shao (1992) have proposed an adjusted jackknife that works well when the sampling fraction is small. However, in many situations, this condition is not satisfied. As a result, the Rao-Shao adjusted jackknife may lead to invalid variance estimators. To overcome this problem, Lee, Rancourt and Särndal (1995) have proposed a simple correction to the Rao-Shao adjusted jackknife. In this presentation, we discuss the properties of the resulting variance estimator under stratified simple random sampling without replacement. Also, using the reverse approach developed by Shao and Steel (1999), we consider another variance estimator that works well when the sampling fractions are not negligible. The case of unequal probability sampling designs, such as proportional-to-size designs, will be briefly discussed.
Release date: 2009-08-11
- Articles and reports: 12-001-X200700210493
Description:
In this paper, we study the problem of variance estimation for a ratio of two totals when marginal random hot deck imputation has been used to fill in missing data. We consider two approaches to inference. In the first approach, the validity of an imputation model is required. In the second approach, the validity of an imputation model is not required but response probabilities need to be estimated, in which case the validity of a nonresponse model is required. We derive variance estimators under two distinct frameworks: the customary two-phase framework and the reverse framework.
Release date: 2008-01-03
- Articles and reports: 12-001-X20060019257
Description:
In the presence of item nonresponse, two approaches have traditionally been used to make inference on parameters of interest. The first approach assumes uniform response within imputation cells, whereas the second approach assumes ignorable response but makes use of a model on the variable of interest as the basis for inference. In this paper, we propose a third approach that assumes a specified ignorable response mechanism without having to specify a model on the variable of interest. In this case, we show how to obtain imputed values which lead to estimators of a total that are approximately unbiased under the proposed approach as well as the second approach. Variance estimators of the imputed estimators that are approximately unbiased are also obtained using an approach of Fay (1991) in which the order of sampling and response is reversed. Finally, simulation studies are conducted to investigate the finite sample performance of the methods in terms of bias and mean square error.
Release date: 2006-07-20
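
A note on the predictive mean matching entry (12-001-X202100100009): the sketch below illustrates only the classical, single-model form of predictive mean matching that the abstract calls the customary approach, not the multiply robust procedure the paper proposes. The function name, the toy data and the nearest-donor rule are assumptions made for illustration.

import numpy as np

def pmm_impute(x, y, responded):
    """Impute missing y by matching each nonrespondent to the respondent with the closest predicted mean."""
    X = np.column_stack([np.ones_like(x), x])            # design matrix with an intercept
    beta, *_ = np.linalg.lstsq(X[responded], y[responded], rcond=None)
    y_hat = X @ beta                                      # predicted means for all sampled units
    y_imputed = y.copy()
    donors = np.flatnonzero(responded)
    for i in np.flatnonzero(~responded):
        nearest = donors[np.argmin(np.abs(y_hat[donors] - y_hat[i]))]
        y_imputed[i] = y[nearest]                         # impute the donor's observed value
    return y_imputed

# Toy usage with simulated item nonresponse
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 3.0 * x + rng.normal(size=200)
responded = rng.random(200) > 0.3                         # roughly 30% item nonresponse
y_obs = np.where(responded, y, np.nan)
y_completed = pmm_impute(x, y_obs, responded)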
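A note on the winsorization entry (12-001-X201500114199): the snippet below only shows the mechanical step of capping values above a given threshold and its effect on a design-weighted total. The choice of threshold by minimizing the largest estimated conditional bias, and the domain-consistency adjustment, are the paper's contributions and are not implemented here; the threshold, weights and data are illustrative assumptions.

import numpy as np

def winsorize(y, threshold):
    """Cap values above the threshold; values at or below it are unchanged."""
    return np.minimum(y, threshold)

rng = np.random.default_rng(1)
y = rng.lognormal(mean=2.0, sigma=1.5, size=500)   # highly skewed economic variable
w = np.full(y.size, 20.0)                          # equal design weights, kept simple on purpose
K = np.quantile(y, 0.99)                           # illustrative threshold; the paper chooses it differently

total_unadjusted = np.sum(w * y)
total_winsorized = np.sum(w * winsorize(y, K))
print(total_unadjusted, total_winsorized)          # the winsorized total trades some bias for lower variance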
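A note on the entropy entry (12-001-X201000211385): for a population of size N and sample size n, simple random sampling without replacement assigns equal probability to the C(N, n) possible samples, so its entropy is log C(N, n), while Bernoulli sampling with inclusion probability p = n/N has entropy -N[p log p + (1 - p) log(1 - p)]. The short computation below, using arbitrary values of N and a 10% sampling fraction, simply checks numerically that the ratio of the two entropies approaches 1 as N grows; it is not drawn from the paper.

from math import lgamma, log

def entropy_srswor(N, n):
    # log of the number of equally likely samples, i.e. log C(N, n)
    return lgamma(N + 1) - lgamma(n + 1) - lgamma(N - n + 1)

def entropy_bernoulli(N, p):
    return -N * (p * log(p) + (1 - p) * log(1 - p))

for N in (100, 10_000, 1_000_000):
    n = N // 10                                          # illustrative 10% sampling fraction
    e_srswor = entropy_srswor(N, n)
    e_bern = entropy_bernoulli(N, n / N)
    print(N, e_srswor, e_bern, e_srswor / e_bern)        # ratio approaches 1 as N grows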
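A note on the jackknife entry (12-001-X201000111246): the sketch below assumes one very simple weight adjustment, a single response class in which respondent weights are multiplied by the inverse of the observed response rate, and contrasts a "shortcut" delete-one jackknife that reuses the full-sample adjustment in every replicate with a full jackknife that recomputes the adjustment within each replicate. The data, the single-class adjustment and all names are assumptions for illustration; the paper studies a wider range of adjustment procedures.

import numpy as np

rng = np.random.default_rng(2)
N, n = 5000, 200
y = rng.gamma(shape=2.0, scale=50.0, size=n)   # skewed survey variable
r = rng.random(n) < 0.7                        # response indicator, about 70% response

def adjusted_total(y_s, r_s, base_weight):
    # single response class: respondent weights scaled by the inverse response rate
    return np.sum(base_weight * (r_s.size / r_s.sum()) * y_s[r_s])

theta_hat = adjusted_total(y, r, N / n)        # nonresponse-adjusted estimate of the total

def jackknife_variance(recompute_adjustment):
    replicates = np.empty(n)
    for j in range(n):
        keep = np.ones(n, dtype=bool)
        keep[j] = False
        base_weight = N / (n - 1)              # rescaled base weight in the delete-one replicate
        if recompute_adjustment:               # full jackknife: redo the adjustment in each replicate
            replicates[j] = adjusted_total(y[keep], r[keep], base_weight)
        else:                                  # shortcut: keep the full-sample factor n / m
            replicates[j] = np.sum(base_weight * (n / r.sum()) * y[keep & r])
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)

print(theta_hat)
print(jackknife_variance(True))    # full jackknife
print(jackknife_variance(False))   # shortcut; per the abstract it tends to overestimate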