
Results

All (27) (0 to 10 of 27 results)

  • Journals and periodicals: 88-518-X
    Geography: Canada
    Description:

    The food-processing industry benefits from a wide range of new advanced technologies. Technological advances include computer-based information and control systems, as well as sophisticated processing and packaging methods that enhance product quality, improve food safety and reduce costs. Continuous quality improvement and benchmarking are examples of related business practices.

    This study examines the use of advanced technologies in the food-processing industry. It focuses not just on the incidence and intensity of use of these new technologies but also on the way technology relates to overall firm strategy. It also examines how technology use is affected by selected industry structural characteristics and how the adoption of technologies affects the performance of firms. It considers as well how the environment influences technological change. The nature and structure of the industry are shown to condition the competitive environment, the business strategies that are pursued, product characteristics and the role of technology.

    Firms make strategic choices in light of technological opportunities and the risks and opportunities provided by their competitive environments. They implement strategies through appropriate business practices and activities, including the development of core competencies in the areas of marketing, production and human resources, as well as technology. Firms that differ in size and nationality choose to pursue different technological strategies. This study focuses on how these differences are reflected in the different use of technology for large and small establishments, for foreign and domestic plants and for plants in different industries.

    Release date: 1999-12-20

  • Articles and reports: 11F0019M1999105
    Geography: Canada
    Description:

    This paper outlines the growth in advanced technology use that has taken place over the last decade in Canadian manufacturing establishments. It presents the percentage of plants that use any one of the advanced technologies studied and how this has changed between 1989 and 1998. It also investigates how growth rates in the 1990s have varied across different technologies in specific functional areas, such as design and engineering, fabrication, communications, and integration and control. In an attempt to discover how changes in technology use are related to certain plant characteristics, the paper then investigates whether the growth in technology use varies across plants that differ by size, nationality and industry. Multivariate analysis is used to investigate the joint effects of plant size, foreign ownership and industry on the incidence of technology adoption and how these effects have changed over the last decade.

    Release date: 1999-12-14

  • Surveys and statistical programs – Documentation: 92-371-X
    Description:

    This report deals with sampling and weighting, a process whereby certain characteristics are collected and processed for a random sample of dwellings and persons identified in the complete census enumeration. Data for the whole population are then obtained by scaling up the results for the sample to the full population level. The use of sampling may lead to substantial reductions in costs and respondent burden, or alternatively, can allow the scope of a census to be broadened at the same cost.

    Release date: 1999-12-07
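
The scaling-up described in this abstract is, at its core, design-weighted expansion estimation. A minimal sketch with hypothetical numbers (not census figures):

```python
# Expansion estimation: scale sample results up to the population using
# design weights. All numbers here are hypothetical illustrations.

population_size = 20_000            # dwellings in the complete enumeration
sample = [1, 0, 1, 1, 0, 1, 0, 1]   # characteristic observed on a random sample
sample_size = len(sample)

# Each sampled dwelling represents population_size / sample_size dwellings.
weight = population_size / sample_size

# Estimated number of dwellings with the characteristic in the full population.
estimated_total = weight * sum(sample)
print(estimated_total)  # 12500.0
```

In practice census weighting is far more elaborate (calibration and non-response adjustments, for instance), but this expansion step is the core of "scaling up the results for the sample to the full population level."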

  • Surveys and statistical programs – Documentation: 11-522-X19980015017
    Description:

    Longitudinal studies with repeated observations on individuals permit better characterizations of change and assessment of possible risk factors, but there has been little experience applying sophisticated models for longitudinal data to the complex survey setting. We present results from a comparison of different variance estimation methods for random effects models of change in cognitive function among older adults. The sample design is a stratified sample of people 65 and older, drawn as part of a community-based study designed to examine risk factors for dementia. The model summarizes the population heterogeneity in overall level and rate of change in cognitive function using random effects for intercept and slope. We discuss an unweighted regression including covariates for the stratification variables, a weighted regression, and bootstrapping; we also did preliminary work on using balanced repeated replication and jackknife repeated replication.

    Release date: 1999-10-22
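
The random intercept-and-slope model this abstract describes, together with a bootstrap over sampled individuals, can be illustrated on simulated data. Everything below (parameter values, sample sizes) is an illustrative assumption, not the study's actual design:

```python
# Toy version of the setup above: each person's cognitive score follows a
# personal line (random intercept and slope). We estimate the mean rate of
# change and bootstrap its standard error by resampling people, not
# observations, since people are the sampling units.
import random
random.seed(1)

def person_slope(scores, times):
    """Ordinary least-squares slope for one person's repeated measures."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(times, scores))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

times = [0, 1, 2, 3]                 # four annual measurement occasions
slopes = []
for _ in range(200):                 # 200 simulated respondents
    b0 = random.gauss(30, 3)         # random intercept (baseline score)
    b1 = random.gauss(-0.5, 0.2)     # random slope (decline per year)
    scores = [b0 + b1 * t + random.gauss(0, 1) for t in times]
    slopes.append(person_slope(scores, times))

mean_slope = sum(slopes) / len(slopes)

# Bootstrap the standard error of the mean slope by resampling people.
boot = []
for _ in range(500):
    resample = [random.choice(slopes) for _ in slopes]
    boot.append(sum(resample) / len(resample))
m = sum(boot) / len(boot)
se = (sum((b - m) ** 2 for b in boot) / (len(boot) - 1)) ** 0.5
print(round(mean_slope, 2), round(se, 3))
```

A design-consistent version would carry survey weights through both the slope estimation and the resampling, which is exactly the comparison the paper undertakes.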

  • Surveys and statistical programs – Documentation: 11-522-X19980015019
    Description:

    The British Labour Force Survey (LFS) is a quarterly household survey with a rotating sample design that can potentially be used to produce longitudinal data, including estimates of labour force gross flows. However, these estimates may be biased due to the effect of non-response. Weighting adjustments are a commonly used method to account for non-response bias. We find that weighting may not fully account for the effect of non-response bias because non-response may depend on the unobserved labour force flows, i.e., the non-response is non-ignorable. To adjust for the effects of non-ignorable non-response, we propose a model for the complex non-response patterns in the LFS which controls for the correlated within-household non-response behaviour found in the survey. The results of modelling suggest that non-response may be non-ignorable in the LFS, causing the weighting estimates to be biased.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015022
    Description:

    This article extends and further develops the method proposed by Pfeffermann, Skinner and Humphreys (1998) for the estimation of gross flows in the presence of classification errors. The main feature of that method is the use of auxiliary information at the individual level which circumvents the need for validation data for estimating the misclassification rates. The new developments in this article are the establishment of conditions for model identification, a study of the properties of a model goodness of fit statistic and modifications to the sample likelihood to account for missing data and informative sampling. The new developments are illustrated by a small Monte-Carlo simulation study.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015028
    Description:

    We address the problem of estimation for the income dynamics statistics calculated from complex longitudinal surveys. In addition, we compare two design-based estimators of longitudinal proportions and transition rates in terms of variability under large attrition rates. One estimator is based on the cross-sectional samples for the estimation of the income class boundaries at each time period and on the longitudinal sample for the estimation of the longitudinal counts; the other estimator is entirely based on the longitudinal sample, both for the estimation of the class boundaries and the longitudinal counts. We develop Taylor linearization-type variance estimators for both the longitudinal and the mixed estimator under the assumption of no change in the population, and for the mixed estimator when there is change.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015030
    Description:

    Two-phase sampling designs have been conducted in waves to estimate the incidence of a rare disease such as dementia. Estimation of disease incidence from a longitudinal dementia study has to appropriately adjust for data missing by death as well as the sampling design used at each study wave. In this paper we adopt a selection model approach to model the missing data by death and use a likelihood approach to derive incidence estimates. A modified EM algorithm is used to deal with data missing by sampling selection. The non-parametric jackknife variance estimator is used to derive variance estimates for the model parameters and the incidence estimates. The proposed approaches are applied to data from the Indianapolis-Ibadan Dementia Study.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015031
    Description:

    The U.S. Third National Health and Nutrition Examination Survey (NHANES III) was carried out from 1988 to 1994. This survey was intended primarily to provide estimates of cross-sectional parameters believed to be approximately constant over the six-year data collection period. However, for some variables (e.g., serum lead, body mass index and smoking behavior), substantive considerations suggest the possible presence of nontrivial changes in level between 1988 and 1994. For these variables, NHANES III is potentially a valuable source of time-change information, compared to other studies involving more restricted populations and samples. Exploration of possible change over time is complicated by two issues. First, some variables displayed substantial regional differences in level, which was of practical concern. Second, nontrivial changes in level over time can lead to nontrivial biases in some customary NHANES III variance estimators. This paper considers these two problems and discusses some related implications for statistical policy.

    Release date: 1999-10-22

  • Surveys and statistical programs – Documentation: 11-522-X19980015033
    Description:

    Victimizations are not randomly scattered through the population, but tend to be concentrated in relatively few victims. Data from the U.S. National Crime Victimization Survey (NCVS), a multistage rotating panel survey, are employed to estimate the conditional probabilities of being a crime victim at time t given the victimization status in earlier interviews. Models are presented and fit to allow use of partial information from households that move in or out of the housing unit during the study period. The estimated probability of being a crime victim at interview t given the status at interview (t-1) is found to decrease with t. Possible implications for estimating cross-sectional victimization rates are discussed.

    Release date: 1999-10-22
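
The conditional probabilities described here can be illustrated on a toy panel. The data below are fabricated, and the NCVS estimation involves weighting and rotation adjustments not shown:

```python
# Estimate P(victim at interview t | status at interview t-1) from panel data.
# Each row is one respondent's victimization indicator at three interviews.
# Data are fabricated for illustration only.
panel = [
    [1, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
]

def conditional_rate(panel, t, prev_status):
    """P(victim at interview t | status prev_status at interview t-1)."""
    cases = [row for row in panel if row[t - 1] == prev_status]
    if not cases:
        return None
    return sum(row[t] for row in cases) / len(cases)

# Victimization rate at interview 1 among prior victims vs. prior non-victims:
print(conditional_rate(panel, 1, 1))  # among interview-0 victims
print(conditional_rate(panel, 1, 0))  # among interview-0 non-victims
```

The concentration of victimization shows up as a higher conditional rate among prior victims; the paper's models extend this idea to respondents observed for only part of the panel.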
Data (0) (0 results)

No content available at this time.

Analysis (18) (0 to 10 of 18 results)

  • Articles and reports: 12-001-X199900111395
    Description:

    In this Issue is a column where the Editor briefly presents each paper of the current issue of Survey Methodology. As well, it sometimes contains information on structural or management changes in the journal.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19990014707
    Description:

    This paper introduces Poisson Mixture sampling, a family of sampling designs so named because each member of the family is a mixture of two Poisson sampling designs, Poisson πps sampling and Bernoulli sampling. These two designs are at opposite ends of a continuous spectrum, indexed by a continuous parameter. Poisson Mixture sampling is conceived for use with the highly skewed populations often arising in business surveys. It gives the statistician a range of different options for the extent of the sample coordination and the control of response burden. Some Poisson Mixture sampling designs give considerably more precise estimates than the usual Poisson πps sampling. This result is noteworthy, because Poisson πps is in itself highly efficient, assuming it is based on a strong measure of size.

    Release date: 1999-10-08
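
One plausible reading of the mixture idea is that each unit's inclusion probability interpolates between a size-proportional (Poisson πps) probability and a constant (Bernoulli) one; the paper's exact parametrization may differ. An illustrative sketch:

```python
# Poisson Mixture sampling, sketched: inclusion probabilities are a mixture of
# a size-proportional (pps) component and a constant (Bernoulli) component,
# indexed by alpha in [0, 1]. This parametrization is an illustrative
# assumption based on the abstract, not the paper's exact formulation.
import random
random.seed(7)

sizes = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]  # skewed size measure
n_expected = 4                               # expected sample size
N = len(sizes)
total = sum(sizes)

def inclusion_probs(alpha):
    # alpha = 1 -> pure Poisson pps; alpha = 0 -> pure Bernoulli
    return [min(1.0, alpha * n_expected * x / total + (1 - alpha) * n_expected / N)
            for x in sizes]

def poisson_sample(probs):
    # Poisson sampling: an independent Bernoulli trial for each unit.
    return [i for i, p in enumerate(probs) if random.random() < p]

probs = inclusion_probs(0.5)
sample = poisson_sample(probs)
# Horvitz-Thompson estimate of the population total of the size variable.
ht_total = sum(sizes[i] / probs[i] for i in sample)
```

Holding the same random numbers across surveys while varying the probabilities is what enables the sample coordination the abstract mentions.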

  • Articles and reports: 12-001-X19990014709
    Description:

    We develop an approach to estimating variances for X-11 seasonal adjustments that recognizes the effects of sampling error and errors from forecast extension. In our approach, seasonal adjustment error in the central values of a sufficiently long series results only from the effect of the X-11 filtering on the sampling errors. Towards either end of the series, we also recognize the contribution to seasonal adjustment error from forecast and backcast errors. We extend the approach to produce variances of errors in X-11 trend estimates, and to recognize error in estimation of regression coefficients used to model, e.g., calendar effects. In empirical results, the contribution of sampling error often dominated the seasonal adjustment variances. Trend estimate variances, however, showed large increases at the ends of series due to the effects of fore/backcast error. Nonstationarities in the sampling errors produced striking patterns in the seasonal adjustment and trend estimate variances.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19990014711
    Description:

    We consider the use of calibration estimators when outliers occur. An extension is obtained for the class of Deville and Särndal (1992) calibration estimators based on Wright (1983) QR estimators. It is also obtained by minimizing a general metric subject to constraints on the calibration variables and weights. As an application, this class of estimators allows us to obtain robust calibration estimators through a careful choice of parameters. This makes it possible, e.g., for cosmetic reasons, to limit robust weights to a predetermined interval. The use of robust estimators with a high breakdown point is also considered. In the specific case of the mean square metric, the estimator proposed by the author is a generalization of a Lee (1991) proposition. The new methodology is illustrated by means of a short simulation study.

    Release date: 1999-10-08
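
In its simplest form, calibration adjusts design weights so that weighted auxiliary totals match known population totals; with the chi-square metric and a single auxiliary variable this reduces to a ratio adjustment. A sketch with made-up data:

```python
# Calibration at its simplest: adjust design weights so the weighted total of
# an auxiliary variable matches its known population total. With one auxiliary
# variable and the chi-square metric this is a ratio adjustment.
# All data here are illustrative.
design_weights = [10.0, 10.0, 10.0, 10.0]
aux = [2.0, 3.0, 5.0, 10.0]   # auxiliary variable observed on the sample
X_total = 230.0               # known population total of the auxiliary variable

ht_aux = sum(d * x for d, x in zip(design_weights, aux))   # 200.0 before calibration
calibrated = [d * X_total / ht_aux for d in design_weights]

# The calibration constraint now holds exactly:
print(sum(w * x for w, x in zip(calibrated, aux)))  # 230.0
```

The robustness issue the paper addresses arises because an outlying unit can force extreme calibrated weights; bounding the weights (or downweighting outliers) trades exact calibration for stability.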

  • Articles and reports: 12-001-X19990014712
    Description:

    This paper investigates a repeated sampling approach to take into account auxiliary information in order to improve the precision of estimators. The objective is to build an estimator with a small conditional bias by weighting the observed values by the inverses of the conditional inclusion probabilities. A general approximation is proposed in cases when the auxiliary statistic is a vector of Horvitz-Thompson estimators. This approximation is quite close to the optimal estimator discussed by Fuller and Isaki (1981), Montanari (1987, 1997), Deville (1992) and Rao (1994, 1997). Next, the optimal estimator is applied to a stratified sampling design and it is shown that the optimal estimator can be viewed as a generalised regression estimator for which the stratification indicator variables are also used at the estimation stage. Finally, the application field of this estimator is discussed in the general context of the use of auxiliary information.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19990014713
    Description:

    Robust small area estimation is studied under a simple random effects model consisting of a basic (or fixed effects) model and a linking model that treats the fixed effects as realizations of a random variable. Under this model a model-assisted estimator of a small area mean is obtained. This estimator depends on the survey weights and remains design-consistent. A model-based estimator of its mean squared error (MSE) is also obtained. Simulation results suggest that the proposed estimator and Kott's (1989) model-assisted estimator are equally efficient, and that the proposed MSE estimator is often much more stable than Kott's MSE estimator, even under moderate deviations of the linking model. The method is also extended to nested error regression models.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19990014714
    Description:

    In this paper a general multilevel model framework is used to provide estimates for small areas using survey data. This class of models allows for variation between areas because of: (i) differences in the distributions of unit level variables between areas, (ii) differences in the distribution of area level variables between areas, and (iii) area specific components of variance which make provision for additional local variation which cannot be explained by unit-level or area-level covariates. Small area estimators are derived for this multilevel model formulation, and an approximation to the mean square error (MSE) of each small area estimate for this general class of mixed models is provided together with an estimator of this MSE. Both the approximation to the MSE and the estimator of MSE take into account three sources of variation: (i) the prediction MSE assuming that both the fixed and components of variance terms in the multilevel model are known, (ii) the additional component due to the fact that the fixed coefficients must be estimated, and (iii) the further component due to the fact that the components of variance in the model must be estimated. The proposed methods are evaluated using a large data set as a basis for numerical investigation. The results confirm that the extra components of variance contained in multilevel models as well as small area covariates can improve small area estimates and that the MSE approximation and estimator are satisfactory.

    Release date: 1999-10-08

  • Articles and reports: 12-001-X19990014715
    Description:

    The Gallup Organization has been conducting household surveys to study state-wide prevalences of alcohol and drug (e.g., cocaine, marijuana, etc.) use. Traditional design-based survey estimates of use and dependence for counties and select demographic groups have unacceptably large standard errors because sample sizes in sub-state groups are too small. Synthetic estimation incorporates demographic information and social indicators in estimates of prevalence through an implicit regression model. Synthetic estimates tend to have smaller variances than design-based estimates, but can be very homogeneous across counties when auxiliary variables are homogeneous. Composite estimates for small areas are weighted averages of design-based survey estimates and synthetic estimates. A second problem generally not encountered at the state level but present for sub-state areas and groups concerns estimating standard errors of estimated prevalences that are close to zero. This difficulty affects not only telephone household survey estimates, but also composite estimates. A hierarchical model is proposed to address this problem. Empirical Bayes composite estimators, which incorporate survey weights, of prevalences and jackknife estimators of their mean squared errors are presented and illustrated.

    Release date: 1999-10-08
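
The composite estimator described here is a weighted average of the direct and synthetic estimates. The shrinkage weight below is a common textbook form, not necessarily the authors' exact estimator, and all numbers are hypothetical:

```python
# Composite small-area estimation: shrink a noisy direct (design-based)
# estimate toward a stable synthetic (regression-based) one. The weight
# favours the direct estimate when its variance is small relative to the
# synthetic estimate's squared bias. All inputs below are hypothetical.

def composite(direct, synthetic, var_direct, msb_synthetic):
    """Weighted average of direct and synthetic small-area estimates."""
    w = msb_synthetic / (var_direct + msb_synthetic)
    return w * direct + (1 - w) * synthetic

# A county with a small sample: the direct prevalence estimate is noisy,
# so the composite leans toward the synthetic estimate.
est = composite(direct=0.12, synthetic=0.08,
                var_direct=0.002, msb_synthetic=0.0005)
print(est)
```

With these inputs the weight on the direct estimate is 0.2, so the composite lands much closer to the synthetic value, which is exactly the behaviour desired for areas with tiny samples.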
Reference (9) (9 results)

  • Surveys and statistical programs – Documentation: 11-522-X19980015035
    Description:

    In a longitudinal survey conducted for k periods some units may be observed for less than k of the periods. Examples include surveys designed with partially overlapping subsamples, a pure panel survey with nonresponse, and a panel survey supplemented with additional samples for some of the time periods. Estimators of the regression type are exhibited for such surveys. An application to special studies associated with the National Resources Inventory is discussed.

    Release date: 1999-10-22