
Results

All (107) (0 to 10 of 107 results)

  • Articles and reports: 82-005-X20020016479
    Geography: Canada
    Description:

    The Population Health Model (POHEM) is a policy analysis tool that helps answer "what-if" questions about the health and economic burden of specific diseases and the cost-effectiveness of administering new diagnostic and therapeutic interventions. This simulation model is particularly pertinent in an era of fiscal restraint, when new therapies are generally expensive and difficult policy decisions are being made. More important, it provides a base for a broader framework to inform policy decisions using comprehensive disease data and risk factors. Our "base case" models comprehensively estimate the lifetime costs of treating breast, lung and colorectal cancer in Canada. Our cancer models have shown the large financial burden of diagnostic work-up and initial therapy, as well as the high costs of hospitalizing those dying of cancer. Our core cancer models (lung, breast and colorectal cancer) have been used to evaluate the impact of new practice patterns. We have used these models to evaluate new chemotherapy regimens as therapeutic options for advanced lung cancer; the health and financial impact of reducing the hospital length of stay for initial breast cancer surgery; and the potential impact of population-based screening for colorectal cancer. To date, the most interesting intervention we have studied has been the use of tamoxifen to prevent breast cancer among high risk women.

    Release date: 2002-10-08

  • Articles and reports: 11-522-X20010016227
    Description:

    The reputation of a national statistical office depends on the level of service it provides. Quality must be a core value and providing excellent service has to be embedded in the culture of a statistical organization.

    The paper outlines what is meant by a high quality statistical service. It explores factors that contribute to a quality work culture. In particular, it outlines the activities and experiences of the Australian Bureau of Statistics in maintaining a quality culture.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016228
    Description:

    The Current Population Survey is the primary source of labour force data for the United States. Throughout any survey process, it is critical that data quality be ensured. This paper discusses how quality issues are addressed during all steps of the survey process, including the development of the sample frame, sampling operations, sample control, data collection, editing, imputation, estimation and questionnaire development. It also reviews the quality evaluations that are built into the survey process. The paper concludes with a discussion of current research and possible future improvements to the survey.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016230
    Description:

    This publication consists of three papers, each addressing data quality issues associated with a large and complex survey. Two of the case studies involve household surveys of labour force activity and the third focuses on a business survey. The papers each address a data quality topic from a different perspective, but share some interesting common threads.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016231
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In 2000, the Behavioral Risk Factor Surveillance System (BRFSS) conducted monthly telephone surveys in the 50 American states, the District of Columbia, and Puerto Rico; each was responsible for collecting its own survey data. In Maine, data collection was split between the state health department and ORC Macro, a commercial market research firm. Examination of survey outcome rates, selection biases and missing values for income suggests that the Maine health department data are more accurate. However, out of 18 behavioural health risk factors, only four are statistically different by data collector, and for these four factors, the data collected by ORC Macro seem more accurate.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016233
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    From January 2000, the data collection method of the Finnish Consumer Survey was changed from a Labour Force Survey panel design mode to an independent survey. All interviews are now carried out centrally from Statistics Finland's Computer Assisted Telephone Interview (CATI) Centre. There have been suggestions that the new survey mode has been influencing the respondents' answers. This paper analyses the extent of obvious changes in the results of the Finnish Consumer Survey. This is accomplished with the help of a pilot survey. Furthermore, this paper studies the interviewer's role in the data collection process. The analysis is based on cross-tabulations, chi-square tests and multinomial logit models. It shows that the new survey method produces more optimistic estimates and expectations concerning economic matters than the old method did.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016235
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Police records collected by the Federal Bureau of Investigation (FBI) through the Uniform Crime Reporting (UCR) Program are the leading source of national crime statistics. Recently, audits to correct UCR records have raised concerns as to how to handle the errors discovered in these files. Concerns centre around the methodology used to detect errors and the procedures used to correct errors once they have been discovered. This paper explores these concerns, focusing on sampling methodology, establishment of a statistical-adjustment factor, and alternative solutions. The paper distinguishes between sample adjustment and sample estimates of an agency's data, and recommends sample adjustment as the most accurate way of dealing with errors.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016236
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Uniform Crime Reporting (UCR) Program has devoted a considerable amount of resources in a continuous effort to improve the quality of its data. In this paper, the authors introduce and discuss the use of cross-ratios and chi-square measures to evaluate the rationality of the data. UCR data are used to illustrate this approach empirically.
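
    To make the measures concrete, a cross-ratio is simply the odds ratio of a 2x2 cross-classification, and a chi-square statistic tests whether the two classifications are independent. The sketch below is a minimal illustration with invented counts, not the UCR evaluation itself.

```python
# Minimal illustration: invented 2x2 counts, not actual UCR data.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross-classification, e.g. offence type by clearance status.
table = np.array([[450,  50],
                  [300, 200]])

# Cross-ratio (odds ratio) of the 2x2 table.
cross_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])

# Pearson chi-square test of independence.
chi2, p_value, dof, _ = chi2_contingency(table)

print(f"cross-ratio = {cross_ratio:.2f}")
print(f"chi-square  = {chi2:.2f} (df = {dof}, p = {p_value:.4f})")
```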

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016237
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Secondary users of health information often assume that administrative data provides a relatively sound basis for making important planning and policy decisions. If errors are evenly or randomly distributed, this assumption may have little impact on these decisions. However, when information sources contain systematic errors, or when systematic errors are introduced during the creation of master files, this assumption can be damaging.

    The most common systematic errors involve underreporting activities for a specific population; inaccurate re-coding of spatial information; and differences in data entry protocols, which have raised questions about the consistency of data submitted by different tracking agencies. The Central East Health Information Partnership (CEHIP) has identified a number of systematic errors in administrative databases and has documented many of these in reports distributed to partner organizations.

    This paper describes how some of these errors were identified and notes the processes that give rise to the loss of data integrity. The conclusion addresses some of the impacts these problems have for health planners, program managers and policy makers.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016238
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Research programs building on population-based, longitudinal administrative data and record-linkage techniques are found in England, Scotland, the United States (the Mayo Clinic), Western Australia and Canada. These systems can markedly expand both the methodological and the substantive research in health and health care.

    This paper summarizes published Canadian data quality studies regarding registries, hospital discharges, prescription drugs, and physician claims. It makes suggestions for improving registries, facilitating record linkage and expanding research into social epidemiology. New trends in case identification and health status measurement using administrative data are also noted, and the differing needs for data quality research in each province are highlighted.

    Release date: 2002-09-12

Stats in brief (1) (1 result)

  • Stats in brief: 13-604-M2002039
    Description:

    The latest annual results for the US/Canada purchasing power parities (PPPs) and real expenditures per head in the United States compared with Canada are published in this paper. The data were developed for the period 1992 to 2001, using the latest US and Canadian expenditure data from the National Accounts and price comparisons for 1999. The paper contains summaries of differences between the results of the multilateral (OECD) study and the Statistics Canada bilateral study. Some differences in classifications have been incorporated, as well as normal National Accounts revisions. Ten tables, covering 21 categories of GDP expenditure, are presented in an appendix.
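
    As a reminder of the arithmetic behind such comparisons (the figures below are invented, not the published results), real expenditure per head is compared by deflating each country's nominal expenditure with the bilateral PPP rather than the market exchange rate:

```python
# Illustrative arithmetic only: invented figures, not the published PPP results.

# Nominal household expenditure per head, in each country's own currency.
expenditure_canada_cad = 20_000.0   # CAD per head
expenditure_us_usd = 24_000.0       # USD per head

# Bilateral purchasing power parity: CAD needed to buy what 1 USD buys.
ppp_cad_per_usd = 1.20

# Convert Canadian expenditure to US dollars at the PPP, not the exchange rate.
expenditure_canada_in_usd = expenditure_canada_cad / ppp_cad_per_usd

# Volume comparison: Canadian real expenditure per head relative to the US.
relative_volume = expenditure_canada_in_usd / expenditure_us_usd
print(f"Canada relative to the US (US = 1.00): {relative_volume:.2f}")
```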

    Release date: 2002-06-28

Articles and reports (105) (80 to 90 of 105 results)

  • Articles and reports: 12-001-X20020016413
    Description:

    Leslie Kish long advocated a "rolling sample" design, with non-overlapping monthly panels which can be cumulated over different lengths of time for domains of different sizes. This enables a single survey to serve multiple purposes. The Census Bureau's new American Community Survey (ACS) uses such a rolling sample design, with annual averages to measure change at the state level, and three-year or five-year moving averages to describe progressively smaller domains. This paper traces Kish's influence on the development of the American Community Survey, and discusses some practical methodological issues that had to be addressed in implementing the design.
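
    Computationally, the rolling design amounts to cumulating non-overlapping monthly panels over longer windows for smaller domains. The sketch below shows only that averaging logic, with a hypothetical data layout and made-up values, not Census Bureau code.

```python
# Toy cumulation of non-overlapping monthly panels, in the spirit of a
# rolling sample design; data layout and values are invented.
from statistics import mean

# Hypothetical monthly panel estimates for one characteristic, keyed by
# (year, month): five years of twelve non-overlapping panels with toy values.
panel_estimates = {(year, month): 100.0 + (year - 1998) + month / 12.0
                   for year in range(1998, 2003)
                   for month in range(1, 13)}

def cumulated_estimate(estimates, last_year, n_years):
    """Average the monthly panel estimates over the most recent n_years years:
    one year for a large domain, three or five years for a smaller one."""
    years = range(last_year - n_years + 1, last_year + 1)
    return mean(est for (yr, _), est in estimates.items() if yr in years)

# Annual average (large domain) and five-year moving average (small domain).
print(cumulated_estimate(panel_estimates, last_year=2002, n_years=1))
print(cumulated_estimate(panel_estimates, last_year=2002, n_years=5))
```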

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016414
    Description:

    Census-taking by traditional methods is becoming more difficult. The possibility of cross-linking administrative files provides an attractive alternative to conducting periodic censuses (Laihonen 2000; Borchsenius 2000). This was proposed in a recent article by Nathan (2001). The redesign at the Institut national de la statistique et des études économiques (INSEE) is based on the idea of a 'continuous census,' originally suggested by Kish (1981, 1990) and Horvitz (1986). A first approach that would be feasible in France can be found in Deville and Jacod (1996). This article reviews methodological developments since INSEE started its population census redesign program.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016417
    Description:

    An approach to exploiting the data from multiple surveys and epochs by benchmarking the parameter estimates of logit models of binary choice and semiparametric survival models has been developed. The goal is to exploit the relatively rich source of socio-economic covariates offered by Statistics Canada's Survey of Labour and Income Dynamics (SLID), and also the historical time-span of the Labour Force Survey (LFS), enhanced by following individuals through each interview in their six-month rotation. A demonstration of how the method can be applied is given, using the maternity leave module of the LifePaths dynamic microsimulation project at Statistics Canada. The choice of maternity leave over job separation is specified as a binary logit model, while the duration of leave is specified as a semiparametric proportional hazards survival model with covariates and a baseline hazard permitted to change each month. Both models are initially estimated by maximum likelihood from pooled SLID data on maternity leaves beginning in the period from 1993 to 1996, then benchmarked to annual estimates from the LFS from 1976 to 1992. In the case of the logit model, the linear predictor is adjusted by a log-odds estimate from the LFS. For the survival model, a Kaplan-Meier estimator of the hazard function from the LFS is used to adjust the predicted hazard in the semiparametric model.
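
    For the binary choice, the benchmarking step can be pictured as an offset on the log-odds scale: the model's linear predictor is shifted so that the implied probability matches an external annual estimate. The sketch below uses invented coefficients and an invented benchmark; it is a schematic of the idea, not the LifePaths or SLID code.

```python
# Schematic log-odds benchmarking of a logit model; all numbers are invented.
import math

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical linear predictor from a logit model fitted to pooled survey data
# (coefficients invented for illustration).
def linear_predictor(age, has_job_tenure):
    return -0.8 + 0.03 * age + 0.6 * has_job_tenure

# Model-implied probability of taking leave for one covariate pattern, and an
# invented external (LFS-style) benchmark proportion for the same year.
eta = linear_predictor(age=30, has_job_tenure=1)
p_model = inv_logit(eta)
p_benchmark = 0.75

# Benchmark by shifting the linear predictor by the difference in log-odds;
# the same offset would then be applied across all covariate patterns.
offset = logit(p_benchmark) - logit(p_model)
p_adjusted = inv_logit(eta + offset)

print(f"model p = {p_model:.3f}, benchmarked p = {p_adjusted:.3f}")
```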

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016419
    Description:

    Since some individuals in a population may lack telephones, telephone surveys using random digit dialling within strata may result in asymptotically biased estimators of ratios. The impact of not being able to sample the non-telephone population is examined. We take into account the propensity that a household owns a telephone when proposing a post-stratified telephone-weighted estimator, which seems to perform better than the typical post-stratified estimator in terms of mean squared error. Such coverage propensities are estimated using the Public Use Microdata Samples, as provided by the United States Census. Non-post-stratified estimators are considered when sample sizes are small. The asymptotic mean squared error of each of the estimators, along with its estimate based on a sample, is derived. Real examples are analysed using the Public Use Microdata Samples. Other forms of non-response are not examined herein.
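
    The weighting idea can be sketched as follows: each telephone respondent's weight is inflated by the inverse of an estimated probability that a comparable household has a telephone, and the adjusted weights are then post-stratified to known population counts. Below is a toy version with invented values and propensities; in the paper the propensities come from the census Public Use Microdata Samples.

```python
# Toy illustration of a propensity-adjusted post-stratified estimator;
# respondent data and telephone propensities are invented.
import numpy as np

# Telephone-sample respondents: post-stratum, outcome, and an estimated
# probability that a household like this one has a telephone.
strata = np.array(["A", "A", "B", "B", "B"])
y = np.array([10.0, 12.0, 20.0, 22.0, 25.0])
phone_propensity = np.array([0.98, 0.90, 0.95, 0.85, 0.70])

# Known population counts for each post-stratum (telephone and non-telephone).
pop_counts = {"A": 1_000, "B": 3_000}

# Inverse-propensity weights, then post-stratification to the known counts.
base_w = 1.0 / phone_propensity
total = 0.0
for h, n_h in pop_counts.items():
    in_h = strata == h
    # Rescale the inverse-propensity weights so they sum to the stratum count;
    # respondents less likely to have a telephone get relatively more weight.
    w_h = base_w[in_h] * (n_h / base_w[in_h].sum())
    total += np.sum(w_h * y[in_h])

# Estimated population mean.
estimate = total / sum(pop_counts.values())
print(f"propensity-adjusted post-stratified mean: {estimate:.2f}")
```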

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016420
    Description:

    The post-stratified estimator sometimes has empty strata. To address this problem, we construct a post-stratified estimator with post-strata sizes set in the sample. The post-strata sizes are then random in the population. The next step is to construct a smoothed estimator by calculating a moving average of the post-stratified estimators. Using this technique, it is possible to construct an exact theory of calibration on distribution. The estimator obtained is not only calibrated on distribution, it is also linear and completely unbiased. We then compare the calibrated estimator with the regression estimator. Lastly, we propose an approximate variance estimator that we validate using simulations.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016421
    Description:

    As in most other surveys, non-response often occurs in the Current Employment Survey conducted monthly by the U.S. Bureau of Labor Statistics (BLS). In a given month, imputation using reported data from previous months generally provides more efficient survey estimators than ignoring non-respondents and adjusting survey weights. However, imputation also has an effect on variance estimation: treating imputed values as reported data and applying a standard variance estimation method leads to negatively biased variance estimators. In this article, we propose some variance estimators using the Grouped Balanced Half Sample method and re-imputation to take imputation into account. Some simulation results for the finite sample performance of the imputed survey estimators and their variance estimators are presented.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016422
    Description:

    To account for imputation for item non-response when estimating variances, Rao and Shao (1992) originated an approach based on adjusted replication. Further developments, particularly the extension of Rao and Shao's jackknife replication to Balanced Repeated Replication, were made by Shao, Chen and Chen (1998). In this article, we explore how these methods can be implemented using replicate weights.
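
    A common way to carry this out with replicate weights is to re-impute the missing items within each replicate using only that replicate's weights, so that the spread of the replicate estimates reflects imputation variability as well as sampling variability. The sketch below is a schematic of that re-imputation step with toy data, simple weighted-mean imputation and a handful of illustrative half-sample replicates; it is not the exact Rao-Shao or Shao, Chen and Chen adjustments.

```python
# Schematic imputation-aware replication variance estimation with toy data.
import numpy as np

rng = np.random.default_rng(0)
n = 8
y = rng.normal(50.0, 10.0, size=n)
responded = np.array([True, True, False, True, True, False, True, True])

full_w = np.ones(n)                       # full-sample weights (toy)
# Four illustrative half-sample replicates: weights doubled or zeroed.
# A real application would use a properly balanced (Hadamard-based) set.
half = np.array([[1, 0, 1, 0, 1, 0, 1, 0],
                 [0, 1, 0, 1, 0, 1, 0, 1],
                 [1, 1, 0, 0, 1, 1, 0, 0],
                 [0, 0, 1, 1, 0, 0, 1, 1]], dtype=float)
rep_w = 2.0 * half                        # replicate weight columns

def weighted_mean_with_reimputation(w):
    """Impute non-respondents with the weighted respondent mean under w,
    then return the weighted mean of the completed data."""
    resp_mean = np.sum(w[responded] * y[responded]) / np.sum(w[responded])
    y_completed = np.where(responded, y, resp_mean)
    return np.sum(w * y_completed) / np.sum(w)

theta_full = weighted_mean_with_reimputation(full_w)
theta_reps = np.array([weighted_mean_with_reimputation(w) for w in rep_w])

# Half-sample style variance: average squared deviation of replicate estimates.
variance = np.mean((theta_reps - theta_full) ** 2)
print(f"estimate = {theta_full:.2f}, replication variance = {variance:.4f}")
```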

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016424
    Description:

    A variety of estimators for the variance of the General Regression (GREG) estimator of a mean have been proposed in the sampling literature, mainly with the goal of estimating the design-based variance. Under certain conditions, estimators can be easily constructed that are approximately unbiased for both the design-variance and the model-variance. Several dual-purpose estimators are studied here in single-stage sampling. These choices are robust estimators of a model-variance even if the model that motivates the GREG has an incorrect variance parameter.

    A key feature of the robust estimators is the adjustment of squared residuals by factors analogous to the leverages used in standard regression analysis. We also show that the delete-one jackknife estimator implicitly includes the leverage adjustments and is a good choice from either the design-based or model-based perspective. In a set of simulations, these variance estimators have small bias and produce confidence intervals with near-nominal coverage rates for several sampling methods, sample sizes and populations in single-stage sampling.

    We also present simulation results for a skewed population where all variance estimators perform poorly. Samples that do not adequately represent the units with large values lead to estimated means that are too small, variance estimates that are too small and confidence intervals that cover at far less than the nominal rate. These defects can be avoided at the design stage by selecting samples that cover the extreme units well. However, in populations with inadequate design information this will not be feasible.
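
    The leverage adjustment referred to above can be illustrated for the simplest case. The sketch below builds a GREG estimate of a mean under simple random sampling with simulated data and computes a variance estimate from residuals divided by one minus their leverages; the particular adjustment shown is only one of the forms such estimators can take, not necessarily those compared in the paper.

```python
# Illustrative GREG mean and leverage-adjusted variance estimate; data simulated.
import numpy as np

rng = np.random.default_rng(1)

# Simulated finite population with one auxiliary variable x (assumed known
# for every population unit, as GREG requires).
N = 1_000
x_pop = rng.gamma(shape=2.0, scale=5.0, size=N)
y_pop = 3.0 + 1.5 * x_pop + rng.normal(0.0, 4.0, size=N)

# Simple random sample without replacement.
n = 100
idx = rng.choice(N, size=n, replace=False)
x, y = x_pop[idx], y_pop[idx]

# Working linear model y ~ 1 + x fitted to the sample.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# GREG (regression) estimate of the population mean of y.
greg_mean = y.mean() + beta[1] * (x_pop.mean() - x.mean())

# Residuals and leverages from the working regression.
H = X @ np.linalg.inv(X.T @ X) @ X.T
leverage = np.diag(H)
resid = y - X @ beta

# Leverage-adjusted residuals (one of several possible adjustment forms).
adj_resid = resid / (1.0 - leverage)

# Design-based variance estimate of the GREG mean under SRS without replacement.
fpc = 1.0 - n / N
var_greg = fpc / n * np.sum(adj_resid ** 2) / (n - 1)

print(f"GREG mean = {greg_mean:.2f}, estimated variance = {var_greg:.4f}")
```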

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020016488
    Description:

    Sampling is a branch of and a tool for statistics, and the field of statistics was founded as a new paradigm in 1810 by Quetelet (Porter 1987; Stigler 1986). Statistics and statisticians deal with the effects of chance events on empirical data. The mathematics of chance had been developed centuries earlier to predict gambling games and to account for errors of observation in astronomy. Data were also compiled for commerce, banking, and government purposes. But combining chance with real data required a new theoretical view: a new paradigm. Thus, statistical science and its various branches, which are the products of the maturity of human development (Kish 1985), arrived late in history and academia. This article examines new concepts in diverse aspects of sampling, which may also be known as new sampling paradigms, models or methods.

    Release date: 2002-07-05

  • Articles and reports: 12-001-X20020019499
    Description:

    "In this Issue" is a column where the Editor briefly presents each paper of the current issue of Survey Methodology. As well, it sometimes contains informations on structure or management changes in the journal.

    Release date: 2002-07-05

Journals and periodicals (1) (1 result)

  • Journals and periodicals: 85F0036X
    Geography: Canada
    Description:

    This study documents the methodological and technical challenges involved in performing analysis on small groups using a sample survey: oversampling, response rates, non-response due to language, release feasibility and sampling variability. It is based on the 1999 General Social Survey (GSS) on victimization.

    Release date: 2002-05-14