Results

All (107) (40 to 50 of 107 results)

  • Articles and reports: 11-522-X20010016272
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The French survey of homeless people using support services is unique because of its scope and the conditions under which it was conducted. About 4,000 users of shelters and soup kitchens were surveyed in January and February 2001. Because some users move from one service point to another, it was necessary to collect precise data on the number of times each respondent used such services (meals and person-nights) during the week preceding the survey. Data quality is extremely important since it has a major impact on the sampling weight assigned to each individual.

    Release date: 2002-09-12
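
    A rough illustration of the weighting issue flagged in this abstract, under the assumption (not stated above) that a user's chance of being intercepted over the survey week grows roughly in proportion to the number of meals or person-nights reported, so the design weight shrinks in proportion; the function and numbers below are hypothetical.

        # Hypothetical sketch: a respondent who used surveyed services k times in the
        # week had roughly k chances of selection, so the design weight scales as 1/k.
        def design_weight(times_used: int, base_weight: float = 1.0) -> float:
            """Down-weight a respondent in proportion to reported service use."""
            if times_used < 1:
                raise ValueError("a sampled respondent used a service at least once")
            return base_weight / times_used

        # A one-unit reporting error barely moves a frequent user's weight but
        # doubles or halves the weight of a once-only user, hence the emphasis on
        # the quality of the reported counts.
        print(design_weight(1), design_weight(2), design_weight(7))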

  • Articles and reports: 11-522-X20010016273
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    For a multivariate survey based on simple random sampling, the problem of calculating an optimal sample size becomes one of solving a stochastic programming problem in which each constraint corresponds to a bounded estimate of the variance for a commodity. The problem is stochastic because the data collected in a previous survey make the components of each constraint random variables; consequently, the calculated sample size is itself a random variable and depends on the quality of those data. By means of a Monte Carlo technique, an empirical probability distribution of the optimal sample size can be produced and used to find the probability that the prescribed precision will be achieved. Corresponding to each set of previously collected data, there is an optimal size and allocation across strata. Reviewing these over several consecutive periods may make it possible to identify troublesome strata and to see a trend in the stability of the data. The review may also reveal an oscillatory pattern in the sample sizes that has evolved over time because each allocation depends on the one before it.

    Release date: 2002-09-12
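
    To make the Monte Carlo step concrete, here is a minimal sketch for the simplest setting the abstract mentions: simple random sampling with a variance bound per commodity, where the previous survey's data are resampled to produce an empirical distribution of the optimal sample size. The population size, variance bounds, and synthetic "previous survey" below are all invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical "previous survey": rows = sampled units, columns = commodities.
        prev = rng.lognormal(mean=3.0, sigma=1.0, size=(500, 3))
        N = 10_000                              # assumed population size
        V = np.array([4.0, 4.0, 9.0])           # assumed variance bounds on each mean

        def optimal_n(sample):
            """Smallest SRS size meeting every variance bound, given this sample's S^2."""
            s2 = sample.var(axis=0, ddof=1)
            n_req = np.ceil(s2 / (V + s2 / N))  # from (1 - n/N) * S^2 / n <= V
            return int(n_req.max())

        # Monte Carlo: bootstrap the previous data to get a distribution of the optimum.
        sizes = np.array([optimal_n(prev[rng.integers(0, len(prev), len(prev))])
                          for _ in range(2000)])

        planned_n = int(np.percentile(sizes, 90))
        print("planned n:", planned_n,
              "| estimated P(precision met):", (sizes <= planned_n).mean())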

  • Articles and reports: 11-522-X20010016274
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Since the late 1950s, the probability surveys in the manufacturing sector within the Manufacturing and Construction Division (MCD) had been selected almost exclusively by Poisson sampling, with unit probabilities assigned proportionate to some measure of size. Poisson sampling has the advantage of simple variance calculations. Its disadvantage is that the sample size is a random variable, which adds an additional (and usually positive) component of variance to the survey estimates. In the 1998 survey year, MCD began using the modified Tillé sampling procedure in some of its surveys. This procedure is used when selection probabilities are unequal and the sample size is fixed. This paper briefly describes the modified procedure and some of its features and, for a variety of dissimilar surveys, contrasts the variance results obtained with the Tillé procedure against those from the earlier Poisson procedure.

    Release date: 2002-09-12
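
    For readers unfamiliar with Poisson sampling, the short sketch below shows why its realized sample size is random: each unit is selected by an independent Bernoulli trial with probability proportional to its size measure. The frame, size measure, and target sample size are invented, and the fixed-size modified Tillé procedure itself is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical establishment frame with a size measure (e.g., employment).
        size = rng.lognormal(mean=2.0, sigma=1.0, size=1000)
        target_n = 100
        p = np.minimum(1.0, target_n * size / size.sum())  # inclusion prob. proportional to size

        # Poisson sampling: independent Bernoulli trials, so the realized sample
        # size varies from draw to draw, adding a component of variance that a
        # fixed-size design such as Tillé sampling avoids.
        realized = [int((rng.random(p.size) < p).sum()) for _ in range(5)]
        print("expected n:", round(p.sum(), 1), "| realized n over 5 draws:", realized)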

  • Articles and reports: 11-522-X20010016275
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Hot deck imputation, in which missing items are replaced with values from respondents, is often used in survey sampling. A model supporting such procedures is one in which response probabilities are assumed equal within imputation cells. This paper describes an efficient version of hot deck imputation and derives its variance under the cell response model, together with an approximation to the fully efficient procedure in which a small number of values are imputed for each non-respondent. Variance estimation procedures are presented and illustrated in a Monte Carlo study.

    Release date: 2002-09-12
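
    As context for the abstract above, here is a minimal sketch of ordinary within-cell hot deck imputation, the baseline procedure the paper builds on; the efficient and fractional variants and the variance estimators it develops are not reproduced. The cell labels and item values are invented.

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical data: imputation cell label and an item value (NaN = nonresponse).
        cells = np.array(["A", "A", "A", "B", "B", "B", "B"])
        y     = np.array([10.0, 12.0, np.nan, 30.0, np.nan, 28.0, 31.0])

        imputed = y.copy()
        for c in np.unique(cells):
            in_cell = cells == c
            donors = imputed[in_cell & ~np.isnan(y)]       # respondents in the cell
            misses = np.where(in_cell & np.isnan(y))[0]    # nonrespondents in the cell
            # Hot deck: replace each missing item with a randomly chosen donor value.
            imputed[misses] = rng.choice(donors, size=len(misses), replace=True)

        print(imputed)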

  • Articles and reports: 11-522-X20010016276
    Description:

    In surveys where interviewers need a high degree of specialist knowledge and training, one is often forced to make do with a small number of highly trained people, each carrying a heavy case load. It is well known that this can lead to interviewer variability having a relatively large impact on the total error, particularly for estimates of simple quantities such as means and proportions. In a previous paper (Davis and Scott, 1995), the impact on continuous responses was examined using a linear components-of-variance model. However, most responses in health questionnaires are binary, and it is known that this approach underestimates the intra-cluster and intra-interviewer correlations for binary responses. In this paper, a multi-level binary model is used to explore the impact of interviewer variability on estimated proportions.

    Release date: 2002-09-12
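
    The inflation of uncertainty described above can be seen in a small simulation from a random-intercept logistic model, one common form of multi-level binary model; the number of interviewers, case loads, and interviewer variance are invented, and this is not the model fitted in the paper.

        import numpy as np

        rng = np.random.default_rng(3)

        n_interviewers, caseload = 10, 200            # few interviewers, heavy case loads
        sigma_u = 0.5                                 # assumed interviewer SD on the logit scale

        u = rng.normal(0.0, sigma_u, n_interviewers)  # interviewer effects
        logit = -1.0 + np.repeat(u, caseload)         # random-intercept logistic model
        y = rng.random(logit.size) < 1 / (1 + np.exp(-logit))

        # With responses correlated within interviewers, the usual binomial standard
        # error of the overall proportion understates the true uncertainty.
        p_hat = y.mean()
        naive_se = np.sqrt(p_hat * (1 - p_hat) / y.size)
        interviewer_means = y.reshape(n_interviewers, caseload).mean(axis=1)
        clustered_se = interviewer_means.std(ddof=1) / np.sqrt(n_interviewers)
        print(round(naive_se, 4), round(clustered_se, 4))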

  • Articles and reports: 11-522-X20010016277
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The advent of computerized record-linkage methodology has facilitated the conduct of cohort mortality studies in which exposure data in one database are electronically linked with mortality data from another database. In this article, the impact of linkage errors on estimates of epidemiological indicators of risk, such as standardized mortality ratios and relative risk regression model parameters, is explored. It is shown that these indicators can be subject to bias and additional variability in the presence of linkage errors, with false links and non-links leading to positive and negative bias, respectively, in estimates of the standardized mortality ratio. Although linkage errors always increase the uncertainty in the estimates, bias can be effectively eliminated in the special case in which the false positive rate equals the false negative rate within homogeneous states defined by cross-classification of the covariates of interest.

    Release date: 2002-09-12
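
    A toy calculation of the direction of bias noted in this abstract: false links add deaths that do not belong to the cohort and missed links drop deaths that do, pushing a standardized mortality ratio up or down respectively. All counts are invented.

        # Hypothetical cohort: 1,000 true deaths and 1,000 expected deaths (true SMR = 1.0).
        true_deaths, expected_deaths = 1000, 1000.0

        false_links  = 50   # non-cohort deaths wrongly linked in  -> push the SMR up
        missed_links = 80   # cohort deaths the linkage misses     -> push the SMR down

        observed = true_deaths + false_links - missed_links
        print("true SMR:", true_deaths / expected_deaths,
              "| estimated SMR:", observed / expected_deaths)
        # When the two kinds of error offset each other exactly, the bias vanishes,
        # which is the flavour of the special case described in the abstract.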

  • Articles and reports: 11-522-X20010016278
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The first round of quality reporting for the statistics produced within Eurostat has almost been completed. This paper presents the experiences so far and, in particular, some of the methodological problems encountered when measuring the quality of statistics produced for international comparisons. A proposal is also presented for indicators that summarize the detailed information provided in these quality reports. Two sets of indicators are discussed: the first more producer-oriented, the second more user-oriented.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016279
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Rather than having to rely on traditional measures of survey quality, such as response rates, the Social Survey Division of the U.K. Office for National Statistics has been looking for alternative ways to report on quality. In order to achieve this, all the processes involved throughout the lifetime of a survey, from sampling and questionnaire design through to production of the finished report, have been mapped out. Having done this, we have been able to find quality indicators for many of these processes. By using this approach, we hope to be able to appraise any changes to our processes as well as to inform our customers of the quality of the work we carry out.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016280
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Survey response rates serve as one key measure of the quality of a data set. However, they are only useful to a statistical agency in the evaluation of ongoing data collections if they are based on a predefined set of formulas and definitions that are uniformly applied across all data collections.

    In anticipation of a revision of the current National Center for Education Statistics (NCES) statistical standards, several agency-wide audits of statistical practices were undertaken in the late 1990s. In particular, a compendium documenting major survey design parameters of NCES surveys was drafted. Related to this, NCES conducted a targeted audit of the consistency in response rate calculations across these surveys.

    Although NCES has had written statistical standards since 1988, the audit of the reported response rates from 50 survey components in 14 NCES surveys revealed considerable variability in procedures used to calculate response rates. During the course of the response rate audit, the Statistical Standards Program staff concluded that the organization of the 1992 Standards made it difficult to find all of the information associated with response rates in the standards. In fact, there are references to response rate in a number of separate standards scattered throughout the 1992 Statistical Standards.

    Release date: 2002-09-12
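
    The need for uniform definitions is easy to see with a toy case count: the same fieldwork outcome produces different "response rates" depending on how cases of unknown eligibility are treated. Both formulas and all counts below are invented for illustration and are not NCES definitions.

        # Hypothetical case counts from one survey component.
        completes, refusals, noncontacts = 700, 150, 100
        ineligible, unknown_eligibility = 30, 50

        # Formula A: unknown-eligibility cases counted as eligible.
        rate_a = completes / (completes + refusals + noncontacts + unknown_eligibility)

        # Formula B: unknown-eligibility cases dropped from the denominator.
        rate_b = completes / (completes + refusals + noncontacts)

        print(round(rate_a, 3), round(rate_b, 3))  # same survey, two different "response rates"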

  • Articles and reports: 11-522-X20010016281
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Methodology for estimating the sampling error of the non-seasonally adjusted estimate of the level of the Index of Production (IoP) has previously been developed using Taylor linearization and parametric bootstrap methods, with both producing comparable results. From that study, the parametric bootstrap approach was judged the more practical to implement. This paper describes the methodology being developed to estimate the sampling error of the non-seasonally adjusted IoP change using the parametric bootstrap method, along with the data needed from the contributing surveys, the assumptions made, and the practical problems encountered during development.

    Release date: 2002-09-12
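
    A compressed sketch of the general parametric bootstrap idea behind the methodology described above, applied to the change in an index between two periods: redraw the period estimates from an assumed sampling distribution and recompute the change each time. The bivariate normal assumption, the correlation, and all figures are invented; the actual IoP methodology combines many contributing surveys.

        import numpy as np

        rng = np.random.default_rng(4)

        # Assumed point estimates and sampling standard errors of an index in two periods.
        index_t0, se_t0 = 100.0, 0.8
        index_t1, se_t1 = 101.5, 0.9
        corr = 0.6                      # assumed correlation from overlapping survey samples

        # Parametric bootstrap: redraw both period estimates from an assumed bivariate
        # normal sampling distribution and recompute the percentage change each time.
        cov = np.array([[se_t0**2, corr * se_t0 * se_t1],
                        [corr * se_t0 * se_t1, se_t1**2]])
        draws = rng.multivariate_normal([index_t0, index_t1], cov, size=5000)
        changes = 100.0 * (draws[:, 1] - draws[:, 0]) / draws[:, 0]

        print("estimated change:", round(100 * (index_t1 - index_t0) / index_t0, 2),
              "| bootstrap SE of change:", round(changes.std(ddof=1), 2))
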
Stats in brief (1) (1 result)

  • Stats in brief: 13-604-M2002039
    Description:

    This paper publishes the latest annual results for the US/Canada purchasing power parities (PPPs) and real expenditures per head in the United States compared with Canada. The data were developed for the period 1992 to 2001, using the latest US and Canadian expenditure data from the National Accounts and price comparisons for 1999. The paper summarizes the differences between the results of the multilateral (OECD) study and the Statistics Canada bilateral study. Some differences in classifications have been incorporated, as well as normal National Accounts revisions. Ten tables covering 21 categories of GDP expenditure are presented in an appendix.

    Release date: 2002-06-28
Journals and periodicals (1) (1 result)

  • Journals and periodicals: 85F0036X
    Geography: Canada
    Description:

    This study documents the methodological and technical challenges involved in analyzing small groups with a sample survey: oversampling, response rates, non-response due to language, release feasibility, and sampling variability. It is based on the 1999 General Social Survey (GSS) on victimization.

    Release date: 2002-05-14