Results

All (9)

  • Articles and reports: 12-001-X199200214480
    Description:

    We consider the problem of estimating the “cost weights” and “relative importances” of different item strata for the local market basket areas. The estimation of these parameters is needed to construct the U.S. Consumer Price Index Numbers. We use multivariate models to construct composite estimators which combine information from relevant sources. The mean squared errors (MSE) of the proposed and the existing estimators are estimated using the repeated half samples available from the survey. Based on our numerical results, the proposed estimators seem to be superior to the existing estimators.

    Release date: 1992-12-15
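The composite idea in this abstract can be sketched in miniature: weight a direct and an indirect estimate inversely to their estimated MSEs. This is a deliberately minimal illustration with hypothetical inputs, not the paper's multivariate estimator.

```python
def composite(direct, indirect, mse_direct, mse_indirect):
    """MSE-minimizing convex combination of two roughly independent
    estimators -- an illustrative sketch, not the paper's estimator."""
    w = mse_indirect / (mse_direct + mse_indirect)  # weight on the direct part
    return w * direct + (1 - w) * indirect
```

With equal MSEs the two estimates are simply averaged; as the direct estimate's MSE grows, weight shifts toward the indirect one, e.g. composite(12.0, 10.0, 3.0, 1.0) puts only weight 0.25 on the direct estimate.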

  • Articles and reports: 12-001-X199200214483
    Description:

    In almost all large surveys, some form of imputation is used. This paper develops a method for variance estimation when single (as opposed to multiple) imputation is used to create a completed data set. Imputation will never reproduce the true values (except in truly exceptional cases). The total error of the survey estimate is viewed in this paper as the sum of sampling error and imputation error. Consequently, an overall variance is derived as the sum of a sampling variance and an imputation variance. The principal theme is the estimation of these two components, using the data after imputation, that is, the actually observed values and the imputed values. The approach is model assisted in the sense that the model implied by the imputation method and the randomization distribution used for sample selection will together determine the appearance of the variance estimators. The theoretical findings are confirmed by a Monte Carlo simulation.

    Release date: 1992-12-15
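The additive decomposition in this abstract can be illustrated with a toy Monte Carlo (a hypothetical random hot-deck setup, not the paper's model-assisted estimator): holding each sample fixed while re-imputing isolates the imputation component, and by the law of total variance the two components add up to the total.

```python
import random
import statistics

rng = random.Random(42)
population = [rng.gauss(50, 10) for _ in range(2000)]

def hot_deck_estimate(sample, rng):
    # The first 70 of 100 units respond; the 30 nonrespondents are
    # filled in by random hot-deck draws from the respondents.
    respondents = sample[:70]
    donors = [rng.choice(respondents) for _ in sample[70:]]
    return statistics.fmean(respondents + donors)

# For each of 200 samples, repeat the (random) imputation 40 times.
per_sample = []
for _ in range(200):
    s = rng.sample(population, 100)
    per_sample.append([hot_deck_estimate(s, rng) for _ in range(40)])

means = [statistics.fmean(reps) for reps in per_sample]
sampling_var = statistics.pvariance(means)
imputation_var = statistics.fmean(statistics.pvariance(reps) for reps in per_sample)
total_var = statistics.pvariance([e for reps in per_sample for e in reps])
# total_var equals sampling_var + imputation_var (law of total variance,
# exact here because every sample gets the same number of imputations)
```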

  • Articles and reports: 12-001-X199200214484
    Description:

    Maximum likelihood estimation from complex sample data requires additional modeling due to the information in the sample selection. Alternatively, pseudo maximum likelihood methods that consist of maximizing estimates of the census score function can be applied. In this article we review some of the approaches considered in the literature and compare them with a new approach derived from the ideas of ‘weighted distributions’. The focus of the comparisons is on situations where some or all of the design variables are unknown or misspecified. The results obtained for the new method are encouraging, but the study is limited so far to simple situations.

    Release date: 1992-12-15
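One concrete instance of the pseudo maximum likelihood idea: for a normal mean, maximizing the weighted log-likelihood sum w_i log f(y_i; theta) with design weights w_i = 1/pi_i reduces to solving the weighted score equation sum w_i (y_i - theta) = 0, i.e. a Hajek-weighted mean. The selection mechanism and numbers below are invented for illustration.

```python
import random

rng = random.Random(0)
population = [rng.gauss(0, 1) for _ in range(20000)]

def incl_prob(y):
    # Informative selection (hypothetical): larger values are sampled
    # more often, so the unweighted sample mean is biased upward.
    return 0.02 if y < 0 else 0.08

sample, weights = [], []
for y in population:
    if rng.random() < incl_prob(y):
        sample.append(y)
        weights.append(1.0 / incl_prob(y))

naive = sum(sample) / len(sample)
# Pseudo-ML estimate: root of the weighted score equation.
pseudo_ml = sum(w * y for w, y in zip(weights, sample)) / sum(weights)
```

Here naive lands near E[y | sampled], well above zero, while the weighted root stays near the population mean of 0.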

  • Articles and reports: 12-001-X199200214485
    Description:

    Godambe and Thompson (1986) define and develop simultaneous optimal estimation of superpopulation and finite population parameters based on a superpopulation model and a survey sampling design. Their theory defines the finite population parameter, \theta_N, as the solution of the optimal estimating equation for the superpopulation parameter \theta; however, some other finite population parameter, \phi, may be of interest. We propose to extend the superpopulation model in such a way that the parameter of interest, \phi, is a known function of \theta_N, say \phi = f (\theta_N). Then \phi is optimally estimated by f (\theta_s), where \theta_s is the optimal estimator of \theta_N, as given by Godambe and Thompson (1986), based on the sample s and the sampling design.

    Release date: 1992-12-15
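A toy version of the plug-in step, with equal weights and an arbitrary choice f = exp purely for illustration: solve the estimating equation for theta, then carry the estimate through f.

```python
import math
import random

rng = random.Random(1)
y = [rng.gauss(2.0, 0.5) for _ in range(500)]  # hypothetical sample values
w = [1.0] * len(y)                             # design weights (equal here)

# theta_s solves the weighted estimating equation sum w_i (y_i - theta) = 0.
theta_s = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

# If the parameter of interest is phi = f(theta_N), say f = exp,
# it is estimated by the plug-in value f(theta_s).
phi_hat = math.exp(theta_s)
```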

  • Articles and reports: 12-001-X199200214486
    Description:

    Resampling methods for inference with complex survey data include the jackknife, balanced repeated replication (BRR) and the bootstrap. We review some recent work on these methods for standard error and confidence interval estimation. Some empirical results for non-smooth statistics are also given.

    Release date: 1992-12-15
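The first and third methods in this abstract are easy to sketch for a non-smooth statistic such as the median (BRR needs a stratified design, so it is omitted here). A well-known caveat motivates the abstract's interest in non-smooth statistics: the delete-one jackknife is inconsistent for the median, while the bootstrap remains usable. The data and replicate counts below are invented.

```python
import random
import statistics

rng = random.Random(7)
y = [rng.gauss(0, 1) for _ in range(101)]

def jackknife_se(data, stat):
    # Delete-one jackknife standard error.
    n = len(data)
    reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    m = statistics.fmean(reps)
    return ((n - 1) / n * sum((r - m) ** 2 for r in reps)) ** 0.5

def bootstrap_se(data, stat, n_boot=500):
    # Resample with replacement and recompute the statistic each time.
    reps = [stat(rng.choices(data, k=len(data))) for _ in range(n_boot)]
    return statistics.stdev(reps)

se_jack = jackknife_se(y, statistics.median)
se_boot = bootstrap_se(y, statistics.median)
```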

  • Articles and reports: 12-001-X199200114492
    Description:

The scenario considered here is that of a sample survey with the following two major objectives: (1) identification, for future follow-up studies, of n^* subjects in each of H subdomains, and (2) estimation, at the time the survey is conducted, of the level of some characteristic in each of these subdomains. An additional constraint imposed here is that the sample design is restricted to single-stage cluster sampling. A variation of single-stage cluster sampling called telescopic single-stage cluster sampling (TSSCS) was proposed in an earlier paper (Levy et al. 1989) as a cost-effective method of identifying n^* individuals in each subdomain, and in this article we investigate the statistical properties of TSSCS in cross-sectional estimation of the level of a population characteristic. In particular, TSSCS is compared to ordinary single-stage cluster sampling (OSSCS) with respect to the reliability of estimates at fixed cost. Motivation for this investigation comes from problems faced during the statistical design of the Shanghai Survey of Alzheimer’s Disease and Dementia (SSADD), an epidemiological study of the prevalence and incidence of Alzheimer’s disease and dementia.

    Release date: 1992-06-15

  • Articles and reports: 12-001-X199200114493
    Description:

This paper examines the suitability of a survey-based procedure for estimating populations in small, rural areas. The procedure is a variation of the Housing Unit Method. It employs local experts to provide information about the demographic characteristics of households randomly selected from residential unit sample frames developed from utility records. The procedure is nonintrusive and less costly than traditional survey data collection efforts. Because the procedure is based on random sampling, confidence intervals can be constructed around the resulting population estimates. The results of a case study are provided in which the total population is estimated for three unincorporated communities in rural, southern Nevada.

    Release date: 1992-06-15
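The confidence-interval claim is ordinary design-based estimation: with simple random sampling of units from the frame, the estimated total and its interval can be sketched as follows (the frame size and household sizes are invented, not from the Nevada case study).

```python
import math
import statistics

N = 400                                       # occupied units on the frame (hypothetical)
sizes = [2, 3, 1, 4, 2, 2, 3, 5, 1, 2] * 5   # persons per sampled unit, n = 50
n = len(sizes)

total_hat = N * statistics.fmean(sizes)       # estimated total population
# SE of the estimated total under SRS, with finite population correction.
se_total = N * math.sqrt((1 - n / N) * statistics.variance(sizes) / n)
ci = (total_hat - 1.96 * se_total, total_hat + 1.96 * se_total)
```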

  • Articles and reports: 12-001-X199200114496
    Description:

The Population Estimates Program of Statistics Canada has traditionally been benchmarked to the most recent census, with no allowance for census coverage error. Because of a significant increase in the level of undercoverage in the 1986 Census, however, Statistics Canada is considering the possibility of adjusting the base population of the estimates program for net census undercoverage. This paper develops and compares four estimators of such a base population: the unadjusted census counts, the adjusted census counts, a preliminary test estimator, and a composite estimator. A generalization of previously proposed risk functions, known as the Weighted Mean Square Error (WMSE), is used as the basis of comparison. The WMSE applies not only to population totals, but also to functions of population totals such as population shares and growth rates between censuses. The use of the WMSE to develop and evaluate small-area estimators in the context of census adjustment is also described.

    Release date: 1992-06-15
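In its simplest form, a weighted mean square error compares estimators by a weighted sum of squared errors over areas; the paper's WMSE generalizes this to functions of totals such as shares and growth rates. A minimal sketch, with all inputs hypothetical:

```python
def wmse(estimates, truth, weights):
    # Weighted mean square error over areas (simplest form).
    return sum(w * (e - t) ** 2
               for w, e, t in zip(weights, estimates, truth))
```

For two areas with equal weight, wmse([1.0, 2.0], [1.0, 1.0], [0.5, 0.5]) gives 0.5.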

  • Articles and reports: 12-001-X199200114498
    Description:

One way to assess the undercount at subnational levels (e.g. the state level) is to obtain sample data from a post-enumeration survey, and then smooth those data based on a linear model of explanatory variables. The relative importance of sampling-error variances to corresponding model-error variances determines the amount of smoothing. Maximum likelihood estimation can lead to oversmoothing, making the assessment of undercount over-reliant on the linear model. Restricted maximum likelihood (REML) estimators do not suffer from this drawback. Empirical Bayes prediction of undercount based on REML will be presented in this article, and will be compared to maximum likelihood and a method of moments by both simulation and example. Large-sample distributional properties of the REML estimators allow accurate mean squared prediction errors of the REML-based smoothers to be computed.

    Release date: 1992-06-15