Results

All (49) (0 to 10 of 49 results)

  • Articles and reports: 12-001-X199000214527
    Description:

    The United States’ National Crime Survey is a large-scale household survey used to provide estimates of victimizations. The National Crime Survey uses a rotating panel design under which sampled housing units are maintained in the sample for three and one half years, with residents of the housing units being interviewed every six months. Nonresponse is a serious problem in longitudinal data from the National Crime Survey, since as few as 25% of all individuals interviewed for the survey are respondents over an entire three-and-one-half-year period. In addition, the nonresponse typically does not occur at random with respect to victimization status. This paper presents models for gross flows between two types of victimization reporting classifications: number of victimizations and seriousness of victimization. The models allow for random or nonrandom nonresponse mechanisms, and allow the probabilities underlying the gross flows to be either unconstrained or symmetric. The models are fit, using maximum likelihood estimation, to the data from the National Crime Survey. (A toy gross-flow illustration follows this entry.)

    Release date: 1990-12-14
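
A toy numerical illustration of a gross-flow table. The counts are hypothetical, and only the simplest special case is shown, in which nonresponse is random (ignorable) and only complete cases are used; it is not a reproduction of the article's models.

```python
import numpy as np

# Hypothetical panel counts for two successive interviews, with respondents
# classified as victimized (V) or not victimized (N) at each interview;
# rows = first interview, columns = second interview.
complete_cases = np.array([[50, 30],     # V -> V, V -> N
                           [40, 380]])   # N -> V, N -> N

# Under a random (ignorable) nonresponse mechanism, and using only the
# complete cases, the maximum likelihood estimates of the gross-flow
# probabilities are simply the observed cell proportions.
flow_probs = complete_cases / complete_cases.sum()
print(flow_probs)
```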

  • Articles and reports: 12-001-X199000214528
    Description:

    Panel responses to the U.S. Consumer Expenditure Interview Survey are compared to assess the magnitude of telescoping in the unbounded first wave. Analysis of selected expense categories confirms other studies’ findings that telescoping can be considerable in unbounded interviews and tends to vary by type of expense. In addition, estimates from the first wave are found to be greater than estimates derived from subsequent waves even after telescoping effects are deducted, and much of this difference can be attributed to the shorter recall period in the first wave of this survey.

    Release date: 1990-12-14

  • Articles and reports: 12-001-X199000214529
    Description:

    The Canadian Labour Force Survey uses a rotating panel design. Every month, one sixth of the sample rotates out and five sixths remain. Hence, under this rotation scheme, once a rotation panel enters the sample, it stays in the sample for six months before it rotates out. Because of this design feature and the way the incoming panel is selected, estimates based on panels in the same or different months are correlated. The correlation between two panel estimates is called the panel correlation. Three kinds of panel correlations are defined in this paper: (1) the correlation (denoted by ρ) between estimates of the same characteristic based on the same panel in different months; (2) the correlation (denoted by γ) between estimates of the same characteristic based on geographically neighboring panels in different months; (3) the correlation (denoted by τ) between estimates of different characteristics based on the same panel in the same or different months. This paper describes a methodology for estimating these panel correlations and presents estimated correlations for selected variables using 1980-81 and 1985-87 data, with some discussion. (A small simulated illustration of the first correlation follows this entry.)

    Release date: 1990-12-14
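
The paper's estimation methodology is not reproduced here; the sketch below only illustrates the definition of the first correlation, ρ, on simulated panel estimates (all names and numbers are made up).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 200 panels, each observed for 6 consecutive months.
# Each panel estimate = overall level + a persistent panel effect + noise,
# so estimates from the same panel are correlated across months.
n_panels, n_months = 200, 6
panel_effect = rng.normal(0.0, 1.0, size=(n_panels, 1))   # persists across months
noise = rng.normal(0.0, 1.0, size=(n_panels, n_months))   # month-specific
estimates = 10.0 + panel_effect + noise                    # panel-by-month estimates

# rho: correlation between estimates of the same characteristic based on the
# same panel in two different months (here, adjacent months).
same_panel_x = estimates[:, :-1].ravel()
same_panel_y = estimates[:, 1:].ravel()
rho_hat = np.corrcoef(same_panel_x, same_panel_y)[0, 1]
print(f"estimated rho (same panel, adjacent months): {rho_hat:.3f}")
```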

  • Articles and reports: 12-001-X199000214530
    Description:

    For a class of linear unbiased estimators under a class of sampling schemes, it is shown that one can ignore the weights used for sample selection when estimating a population ratio by the ratio of two unbiased estimators of the numerator and the denominator that define the population ratio. This class of schemes includes commonly used designs such as unequal probability sampling with or without replacement, and stratified proportional allocation sampling with unequal selection probabilities and without replacement within each stratum. (A numerical sketch of the weighted and unweighted ratio estimators follows this entry.)

    Release date: 1990-12-14
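
The article's class of estimators and schemes is not reproduced here; the sketch below (with made-up data) merely shows how the ratio of two Horvitz-Thompson estimators and the unweighted sample ratio are computed under Poisson sampling with unequal inclusion probabilities, both aimed at the population ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population with two characteristics y and x.
N = 5000
x = rng.gamma(shape=2.0, scale=50.0, size=N)
y = 0.3 * x + rng.normal(0.0, 5.0, size=N)
true_ratio = y.sum() / x.sum()

# Poisson sampling: each unit included independently with probability pi_i.
pi = np.clip(0.02 + 0.0001 * x, 0.0, 1.0)   # unequal, size-related probabilities
sampled = rng.random(N) < pi

# Ratio of two unbiased (Horvitz-Thompson) estimators of the totals of y and x.
y_hat = np.sum(y[sampled] / pi[sampled])
x_hat = np.sum(x[sampled] / pi[sampled])
print(f"population ratio        : {true_ratio:.4f}")
print(f"HT ratio estimate       : {y_hat / x_hat:.4f}")
print(f"unweighted sample ratio : {y[sampled].sum() / x[sampled].sum():.4f}")
```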

  • Articles and reports: 12-001-X199000214531
    Description:

    Benchmarking is a method of improving estimates from a sub-annual survey with the help of corresponding estimates from an annual survey. For example, estimates of monthly retail sales might be improved using estimates from the annual survey. This article first deals with the problem posed by the benchmarking of time series produced by economic surveys, and then reviews the most relevant methods for solving it. Next, two new statistical methods are proposed, based on a non-linear model for the sub-annual data; the benchmarked estimates are obtained by applying weighted least squares. (A minimal weighted-least-squares benchmarking sketch follows this entry.)

    Release date: 1990-12-14
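
The article's non-linear model is not reproduced here; the sketch below only illustrates the basic weighted-least-squares idea of benchmarking on made-up monthly figures: the monthly values are adjusted as little as possible (in a weighted sense) so that they sum to the annual benchmark.

```python
import numpy as np

def benchmark_wls(monthly, annual_total, weights):
    """Adjust one year of monthly estimates so they sum to the annual benchmark,
    minimizing sum_t w_t * (adjusted_t - monthly_t)**2.  Closed form: each month
    absorbs a share of the discrepancy proportional to 1 / w_t."""
    monthly = np.asarray(monthly, dtype=float)
    weights = np.asarray(weights, dtype=float)
    discrepancy = annual_total - monthly.sum()
    shares = (1.0 / weights) / np.sum(1.0 / weights)
    return monthly + discrepancy * shares

# Hypothetical monthly retail sales and an annual benchmark that disagrees with their sum.
monthly = np.array([100, 95, 110, 120, 118, 125, 130, 128, 122, 119, 140, 160], dtype=float)
annual = 1500.0
weights = 1.0 / monthly            # e.g. let larger months absorb more of the adjustment
adjusted = benchmark_wls(monthly, annual, weights)
print(adjusted.sum())              # 1500.0, the annual benchmark
```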

  • Articles and reports: 12-001-X199000214532
    Description:

    Births by census division are studied via graphs and maps for the province of Saskatchewan for the years 1986-87. The goal of the work is to see how births are related to time and geography by obtaining contour maps that display the birth phenomenon in a smooth fashion. A principal difficulty is that the data are aggregated. A secondary goal is to examine the extent to which the Poisson-lognormal model can replace, for count data, the normal regression model used for continuous variates. To this end, a hierarchy of models for count-valued random variates is fit to the birth data by maximum likelihood. These models include the simple Poisson, the Poisson with year and weekday effects, and the Poisson-lognormal with year and weekday effects. The use of the Poisson-lognormal is motivated by the idea that important covariates are unavailable for inclusion in the fitting. As the discussion indicates, the work is preliminary. (A sketch fitting the simplest Poisson model by maximum likelihood follows this entry.)

    Release date: 1990-12-14
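
The birth data themselves are not available here; the sketch below fits the simplest member of the hierarchy mentioned, a log-linear Poisson model with a weekday effect, by maximum likelihood on simulated daily counts (all names and numbers are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated daily birth counts over two years with a weekend dip (illustrative only).
n_days = 730
weekday = np.arange(n_days) % 7
is_weekend = (weekday >= 5).astype(float)
true_rate = np.exp(3.0 - 0.2 * is_weekend)          # log-linear Poisson rate
counts = rng.poisson(true_rate)

def neg_log_lik(beta):
    # Poisson log-likelihood with log link: log(mu) = b0 + b1 * weekend indicator
    # (the log(y!) term is constant in beta and is dropped).
    mu = np.exp(beta[0] + beta[1] * is_weekend)
    return -(counts * np.log(mu) - mu).sum()

start = np.array([np.log(counts.mean()), 0.0])       # sensible starting values
fit = minimize(neg_log_lik, x0=start, method="BFGS")
print("ML estimates (intercept, weekend effect):", fit.x)
```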

  • Articles and reports: 12-001-X199000214533
    Description:

    A commonly used model for the analysis of time series is the seasonal ARIMA model. However, the survey errors in the input data are usually ignored in the analysis. We show, through the use of state-space models with partially improper initial conditions, how to estimate the unknown parameters of this model using maximum likelihood methods. As well, the survey estimates can be smoothed using an empirical Bayes framework, and model validation can be performed. We apply these techniques to an unemployment series from the Labour Force Survey. (A minimal state-space filtering sketch follows this entry.)

    Release date: 1990-12-14
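
The article's seasonal ARIMA specification and its treatment of improper initial conditions are not reproduced here; the sketch below only shows the underlying state-space idea with the simplest possible signal model, a random walk observed with sampling error, run through the Kalman filter on simulated data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated series: a random-walk "true" signal observed with sampling error.
T, q, r = 120, 0.05, 0.5           # q = signal innovation variance, r = sampling error variance
signal = np.cumsum(rng.normal(0.0, np.sqrt(q), T)) + 8.0
observed = signal + rng.normal(0.0, np.sqrt(r), T)

# Kalman filter for the local-level model:
#   state_t = state_{t-1} + eta_t,   y_t = state_t + e_t.
state, var = observed[0], 10.0      # vague initial conditions
filtered = np.empty(T)
for t in range(T):
    var_pred = var + q                               # predict
    gain = var_pred / (var_pred + r)                 # Kalman gain
    state = state + gain * (observed[t] - state)     # update with observation t
    var = (1.0 - gain) * var_pred
    filtered[t] = state

print("last observed vs filtered value:", observed[-1], filtered[-1])
```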

  • Articles and reports: 12-001-X199000214534
    Description:

    The common approach to small area estimation is to exploit the cross-sectional relationships in the data in an attempt to borrow information from one small area to assist estimation in others. However, in the case of repeated surveys, further gains in efficiency can be secured by modelling the time series properties of the data as well. We illustrate the idea by considering regression models with time-varying, cross-sectionally correlated coefficients. The use of past relationships to estimate current means raises the question of how to protect against model breakdowns. We propose a modification which guarantees that the model-dependent predictors of aggregates of the small area means coincide with the corresponding survey estimators, and we explore the statistical properties of the modification. The proposed procedure is applied to data on home sale prices used for the computation of housing price indexes. (A toy benchmarking-adjustment sketch follows this entry.)

    Release date: 1990-12-14
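
The regression model with time-varying coefficients is not reproduced here; the sketch below (hypothetical numbers) only illustrates the kind of constraint described, using a simple ratio adjustment so that a weighted aggregate of model-based small area predictors coincides with the corresponding direct survey estimator.

```python
import numpy as np

# Hypothetical model-based predictors for 5 small areas and their aggregation weights.
model_pred = np.array([12.0, 8.5, 15.2, 9.8, 11.0])
agg_weights = np.array([0.3, 0.1, 0.25, 0.15, 0.2])     # e.g. population shares
direct_aggregate = 11.8                                  # direct survey estimator for the union

# Ratio benchmarking: scale all predictors so their weighted aggregate matches
# the direct estimator.  (The article proposes a specific modification; this is
# only the generic idea of forcing agreement at the aggregate level.)
scale = direct_aggregate / np.dot(agg_weights, model_pred)
benchmarked = scale * model_pred
print(np.dot(agg_weights, benchmarked))   # equals 11.8 by construction
```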

  • Articles and reports: 12-001-X199000214535
    Description:

    Papers by Scott and Smith (1974) and Scott, Smith, and Jones (1977) suggested using signal extraction results from time series analysis to improve estimates in repeated surveys, an approach we call the time series approach to estimation in repeated surveys. We review the underlying philosophy of this approach, pointing out that it stems from the recognition of two sources of variation, time series variation and sampling variation, and that the approach can provide a unifying framework for other problems where both sources of variation are present. We obtain some theoretical results for the time series approach regarding the design consistency of the time series estimators and the uncorrelatedness of the signal and sampling error series. We observe that, from a design-based perspective, the time series approach trades some bias for a reduction in variance and in average mean squared error relative to classical survey estimators. We briefly discuss modeling to implement the time series approach, and then illustrate it by applying it to time series of retail sales of eating places and of drinking places from the U.S. Census Bureau’s Retail Trade Survey. (A scalar signal-extraction sketch follows this entry.)

    Release date: 1990-12-14
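
The sketch below is only the scalar version of the signal-extraction idea the abstract refers to: when a survey estimate equals the true signal plus independent sampling error with known variances, the minimum-MSE linear estimate of the signal shrinks the survey estimate toward the signal's predicted value. The numbers are illustrative, not from the article.

```python
# Scalar signal extraction (illustrative numbers):
# survey estimate = signal + sampling error, with known variances.
prior_mean = 100.0      # model prediction of the signal, e.g. from past periods
signal_var = 4.0        # variance of the signal around that prediction
error_var = 9.0         # sampling error variance of the survey estimate
survey_estimate = 108.0

weight = signal_var / (signal_var + error_var)            # shrinkage weight
extracted = prior_mean + weight * (survey_estimate - prior_mean)
print(f"signal-extraction estimate: {extracted:.2f}")      # lies between 100 and 108
```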

  • Articles and reports: 12-001-X199000214536
    Description:

    We discuss frame and sample maintenance issues that arise in recurring surveys. A new system is described that meets four objectives. Through time, it maintains (1) the geographical balance of a sample; (2) the sample size; (3) the unbiased character of estimators; and (4) the lack of distortion in estimated trends. The system is based upon the Peano key, which creates a fractal, space-filling curve. An example of the new system is presented using a national survey of establishments in the United States conducted by the A.C. Nielsen Company. (A sketch of a related space-filling-curve key follows this entry.)

    Release date: 1990-12-14
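
The Peano key construction itself is not reproduced here; the sketch below uses a Morton (Z-order) key, a different but closely related space-filling-curve key, to show how interleaving the bits of grid coordinates turns two-dimensional locations into a one-dimensional ordering that can then be sampled systematically (all data are made up).

```python
import numpy as np

def morton_key(ix, iy, bits=16):
    """Interleave the bits of integer grid coordinates (ix, iy) into one key.
    Nearby points on the plane tend to receive nearby keys."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (2 * b)
        key |= ((iy >> b) & 1) << (2 * b + 1)
    return key

rng = np.random.default_rng(4)

# Hypothetical establishment locations, discretized to a 2^16 x 2^16 grid.
n = 1000
ix = rng.integers(0, 2**16, n)
iy = rng.integers(0, 2**16, n)
keys = np.array([morton_key(int(a), int(b)) for a, b in zip(ix, iy)])

# Order establishments along the curve and take a geographically balanced
# systematic sample of 100 of them.
order = np.argsort(keys)
sample = order[::len(order) // 100][:100]
print("first few sampled establishment indices:", sample[:5])
```
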
Stats in brief (1) (1 result)

  • Stats in brief: 75-001-X199000475
    Geography: Canada
    Description:

    In 1989, the Survey of Literacy Skills Used in Daily Activities was conducted to assess the reading and numeracy skills of Canada's adult population. This article reports the survey's main findings.

    Release date: 1990-11-27