Results

All (122) (50 to 60 of 122 results)

  • Articles and reports: 11-522-X20010016274
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Since the late 1950s, the probability surveys in the manufacturing sector within the Manufacturing and Construction Division (MCD) had been selected almost exclusively by using Poisson sampling with unit probabilities assigned proportionate to some measure of size. Poisson sampling has the advantage of simple variance calculations. Its disadvantage is that the sample size is a random variable, which adds an additional (and usually positive) component of variance to the survey estimates. In the 1998 survey year, MCD initiated the use of the modified Tillé sampling procedure in some of its surveys. This sampling procedure is used when there is unequal probability of selection and the sample size is fixed. This paper briefly describes this modified procedure and some of its features, and, for a variety of dissimilar surveys, it contrasts variance results obtained using the Tillé procedure with those resulting from the earlier Poisson procedure.

    Release date: 2002-09-12
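
A minimal numpy sketch of the contrast described above: under Poisson sampling each unit is included independently with probability proportional to its size measure, so the realized sample size varies from draw to draw with variance sum(pi*(1-pi)). The frame, size measures and target sample size below are hypothetical, and the fixed-size alternative (the modified Tillé procedure) is not implemented here.

```python
# Illustrative only: not the modified Tillé procedure, just the random-sample-size
# property of Poisson sampling with size-proportional inclusion probabilities.
import numpy as np

rng = np.random.default_rng(42)

size_measure = rng.lognormal(mean=2.0, sigma=1.0, size=1000)   # hypothetical frame
expected_n = 100                                               # target expected sample size
pi = np.minimum(1.0, expected_n * size_measure / size_measure.sum())

# Draw many Poisson samples and record the realized sample sizes.
realized_sizes = [(rng.random(pi.size) < pi).sum() for _ in range(5000)]

print("expected n:", pi.sum().round(1))
print("realized n: mean %.1f, variance %.1f" % (np.mean(realized_sizes),
                                                np.var(realized_sizes)))
# The size variance is sum(pi*(1-pi)) > 0; a fixed-size unequal-probability
# design such as Tillé sampling removes this component entirely.
print("theoretical size variance:", (pi * (1 - pi)).sum().round(1))
```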

  • Articles and reports: 11-522-X20010016275
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Hot deck imputation, in which missing items are replaced with values from respondents, is often used in survey sampling. A model supporting such procedures is one in which response probabilities are assumed equal within imputation cells. This paper describes an efficient version of hot deck imputation, derives its variance under the cell response model, and presents an approximation to the fully efficient procedure in which a small number of values are imputed for each non-respondent. Variance estimation procedures are presented and illustrated in a Monte Carlo study.

    Release date: 2002-09-12
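
To make the basic mechanism concrete, here is a minimal sketch of ordinary within-cell hot deck imputation: each non-respondent receives the value of a randomly chosen respondent (donor) from the same imputation cell. The data and cell definitions are hypothetical, and the paper's efficient variant and its variance estimators are not reproduced.

```python
# A minimal within-cell hot deck sketch (illustrative data only).
import numpy as np

rng = np.random.default_rng(0)

cell = np.array([1, 1, 1, 1, 2, 2, 2, 2])                        # imputation cells
y    = np.array([10., 12., np.nan, 11., 30., np.nan, 28., 31.])  # item values, NaN = missing

imputed = y.copy()
for c in np.unique(cell):
    in_cell = cell == c
    donors = y[in_cell & ~np.isnan(y)]                 # respondents in the cell
    missing_idx = np.where(in_cell & np.isnan(y))[0]
    # each non-respondent receives the value of a randomly chosen donor
    imputed[missing_idx] = rng.choice(donors, size=missing_idx.size, replace=True)

print(imputed)
```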

  • Articles and reports: 11-522-X20010016276
    Description:

    In surveys where interviewers need a high degree of specialist knowledge and training, one is often forced to make do with a small number of highly trained people, each having a high case load. It is well known that this can lead to interviewer variability having a relatively large impact on the total error, particularly for estimates of simple quantities such as means and proportions. In a previous paper (Davis and Scott, 1995), the impact for continuous responses was examined using a linear components-of-variance model. However, most responses in health questionnaires are binary, and it is known that this approach results in underestimating the intra-cluster and intra-interviewer correlations for binary responses. In this paper, a multi-level binary model is used to explore the impact of interviewer variability on estimated proportions.

    Release date: 2002-09-12
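
A small simulation, under assumed parameter values, of the effect the abstract describes: interviewer-to-interviewer variation in a binary response induces an intra-interviewer correlation rho and inflates the variance of the estimated proportion roughly by the design effect 1 + (m - 1)*rho, where m is the case load. The beta-distributed interviewer effects and all numbers below are illustrative, not taken from the paper.

```python
# Illustrative simulation of interviewer effects on a binary response.
import numpy as np

rng = np.random.default_rng(1)
k, m = 20, 100                 # 20 interviewers with 100 cases each (hypothetical)
p, rho = 0.3, 0.05             # overall prevalence and intra-interviewer correlation

# Beta-distributed interviewer-level probabilities with mean p and ICC rho.
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho

est = []
for _ in range(4000):
    p_i = rng.beta(a, b, size=k)                      # interviewer effects
    responses = rng.random((k, m)) < p_i[:, None]     # binary responses
    est.append(responses.mean())

print("empirical variance of p-hat:   %.6f" % np.var(est))
print("SRS variance p(1-p)/n:         %.6f" % (p * (1 - p) / (k * m)))
print("design-effect approximation:   %.6f" % (p * (1 - p) / (k * m) * (1 + (m - 1) * rho)))
```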

  • Articles and reports: 11-522-X20010016277
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The advent of computerized record-linkage methodology has facilitated the conduct of cohort mortality studies in which exposure data in one database are electronically linked with mortality data from another database. In this article, the impact of linkage errors on estimates of epidemiological indicators of risk, such as standardized mortality ratios and relative risk regression model parameters, is explored. It is shown that these indicators can be subject to bias and additional variability in the presence of linkage errors, with false links and non-links leading to positive and negative bias, respectively, in estimates of the standardized mortality ratio. Although linkage errors always increase the uncertainty in the estimates, bias can be effectively eliminated in the special case in which the false positive rate equals the false negative rate within homogeneous states defined by cross-classification of the covariates of interest.

    Release date: 2002-09-12
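
A small worked example, with hypothetical numbers, of the bias mechanism described above for a standardized mortality ratio (SMR = observed deaths / expected deaths): missed true links (false negatives) remove genuine cohort deaths and bias the SMR downward, while false links add spurious deaths and bias it upward.

```python
# Hypothetical numbers only; shows the direction of bias from each error type.
true_deaths = 200.0        # deaths that genuinely belong to the cohort
expected    = 160.0        # expected deaths from reference rates
fn_rate     = 0.05         # fraction of true links missed (false negatives)
false_links = 12.0         # deaths wrongly linked to the cohort (false positives)

true_smr      = true_deaths / expected
observed      = true_deaths * (1 - fn_rate) + false_links
estimated_smr = observed / expected

print("true SMR      = %.3f" % true_smr)
print("estimated SMR = %.3f" % estimated_smr)
# Missed links alone pull the estimate down; false links alone push it up.
# When the two error rates offset each other, the bias can largely cancel.
```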

  • Articles and reports: 11-522-X20010016278
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The first round of quality reporting on the statistics produced in Eurostat has almost been completed. This paper presents the experiences so far and, in particular, some of the methodological problems encountered when measuring the quality of the statistics that are produced for international comparisons. A proposal is also presented for indicators that summarize the detailed information provided in these quality reports. Two sets of indicators are discussed: the first more producer-oriented, the second more user-oriented.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016279
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Rather than having to rely on traditional measures of survey quality, such as response rates, the Social Survey Division of the U.K. Office for National Statistics has been looking for alternative ways to report on quality. In order to achieve this, all the processes involved throughout the lifetime of a survey, from sampling and questionnaire design through to production of the finished report, have been mapped out. Having done this, we have been able to find quality indicators for many of these processes. By using this approach, we hope to be able to appraise any changes to our processes as well as to inform our customers of the quality of the work we carry out.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016280
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Survey response rates serve as one key measure of the quality of a data set. However, they are only useful to a statistical agency in the evaluation of ongoing data collections if they are based on a predefined set of formulas and definitions that are uniformly applied across all data collections.

    In anticipation of a revision of the current National Center for Education Statistics (NCES) statistical standards, several agency-wide audits of statistical practices were undertaken in the late 1990s. In particular, a compendium documenting major survey design parameters of NCES surveys was drafted. Related to this, NCES conducted a targeted audit of the consistency in response rate calculations across these surveys.

    Although NCES has had written statistical standards since 1988, the audit of the reported response rates from 50 survey components in 14 NCES surveys revealed considerable variability in procedures used to calculate response rates. During the course of the response rate audit, the Statistical Standards Program staff concluded that the organization of the 1992 Standards made it difficult to find all of the information associated with response rates in the standards. In fact, there are references to response rate in a number of separate standards scattered throughout the 1992 Statistical Standards.

    Release date: 2002-09-12
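
As a generic illustration of why a predefined formula matters, the sketch below computes a unit response rate in which cases of unknown eligibility are allocated using an assumed eligibility rate e; the counts and the formula are illustrative only and are not NCES's standard definitions.

```python
# Generic unit response rate sketch (hypothetical counts, assumed eligibility rate).
completes        = 820
refusals         = 90
noncontacts      = 60
known_ineligible = 30
unknown          = 100

# Assumed eligibility rate among cases of unknown eligibility.
e = (completes + refusals + noncontacts) / (completes + refusals + noncontacts + known_ineligible)

response_rate = completes / (completes + refusals + noncontacts + e * unknown)
print("assumed eligibility rate e = %.3f" % e)
print("unit response rate = %.3f" % response_rate)
```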

  • Articles and reports: 11-522-X20010016281
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Methodology for estimating the sampling error of the non-seasonally adjusted estimate of the level of the Index of Production (IoP) has previously been developed using Taylor linearization and parametric bootstrap methods, with both producing comparable results. From that study, it was considered that the parametric bootstrap approach would be more practical to implement. This paper describes the methodology being developed to estimate the sampling error of the non-seasonally adjusted IoP change using the parametric bootstrap method, along with the data needed from the contributing surveys, the assumptions made, and the practical problems encountered during development.

    Release date: 2002-09-12
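
A generic parametric bootstrap sketch of the idea described above, applied to the change in an index between two periods: the level estimates are treated as draws from an assumed bivariate normal sampling distribution, and the simulated changes are summarised. The point estimates, standard errors and correlation are hypothetical, and this is not the ONS methodology itself.

```python
# Generic parametric bootstrap for the standard error of a period-on-period change.
import numpy as np

rng = np.random.default_rng(7)

index_t0, se_t0 = 102.4, 0.8    # hypothetical level estimate and its standard error
index_t1, se_t1 = 104.1, 0.9
corr = 0.6                      # assumed correlation from overlapping samples

B = 10000
cov = corr * se_t0 * se_t1
draws = rng.multivariate_normal([index_t0, index_t1],
                                [[se_t0**2, cov], [cov, se_t1**2]], size=B)
change = draws[:, 1] - draws[:, 0]

print("estimated change: %.2f" % (index_t1 - index_t0))
print("bootstrap SE of change: %.2f" % change.std(ddof=1))
print("95%% interval: [%.2f, %.2f]" % tuple(np.percentile(change, [2.5, 97.5])))
```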

  • Articles and reports: 11-522-X20010016282
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Discharge Abstract Database (DAD) is one of the key data holdings of the Canadian Institute for Health Information (CIHI). The institute is a national, not-for-profit organization that plays a critical role in the development of Canada's health information system. The DAD contains acute care discharge data from most Canadian hospitals. The data generated are essential for determining, for example, the number and types of procedures and the length of hospital stays. CIHI is conducting the first national data quality study of selected clinical and administrative data from the DAD. This study is evaluating and measuring the accuracy of the DAD by returning to the original data sources and comparing this information with what exists in the CIHI database, in order to identify any discrepancies and their associated reasons. This paper describes the DAD data quality study and some preliminary findings. The findings are also briefly compared with those of another, similar study. In conclusion, the paper discusses subsequent steps for the study and how the findings from the first year are contributing to improvements in the quality of the DAD.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016283
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The accurate recording of patients' Indigenous status in hospital separations data is critical to analyses of health service use by Aboriginal and Torres Strait Islander Australians, who have relatively poor health. However, the accuracy of these data is not well understood. In 1998, a methodology for assessing the data accuracy was piloted in 11 public hospitals. Data were collected for 8,267 patients using a personal interview, and compared with the corresponding, routinely collected data. Among the 11 hospitals, the proportion of patients correctly recorded as Indigenous ranged from 55% to 100%. Overall, hospitals with high proportions of Indigenous persons in their catchment areas reported more accurate data. The methodology has since been used to assess data quality in hospitals in two Australian states and to promote best practice data collection.

    Release date: 2002-09-12

Data (0) (0 results)

No content available at this time.

Analysis (107) (0 to 10 of 107 results)

  • Articles and reports: 82-005-X20020016479
    Geography: Canada
    Description:

    The Population Health Model (POHEM) is a policy analysis tool that helps answer "what-if" questions about the health and economic burden of specific diseases and the cost-effectiveness of administering new diagnostic and therapeutic interventions. This simulation model is particularly pertinent in an era of fiscal restraint, when new therapies are generally expensive and difficult policy decisions are being made. More important, it provides a base for a broader framework to inform policy decisions using comprehensive disease data and risk factors. Our "base case" models comprehensively estimate the lifetime costs of treating breast, lung and colorectal cancer in Canada. Our cancer models have shown the large financial burden of diagnostic work-up and initial therapy, as well as the high costs of hospitalizing those dying of cancer. Our core cancer models (lung, breast and colorectal cancer) have been used to evaluate the impact of new practice patterns. We have used these models to evaluate new chemotherapy regimens as therapeutic options for advanced lung cancer; the health and financial impact of reducing the hospital length of stay for initial breast cancer surgery; and the potential impact of population-based screening for colorectal cancer. To date, the most interesting intervention we have studied has been the use of tamoxifen to prevent breast cancer among high risk women.

    Release date: 2002-10-08

  • Articles and reports: 11-522-X20010016227
    Description:

    The reputation of a national statistical office depends on the level of service it provides. Quality must be a core value and providing excellent service has to be embedded in the culture of a statistical organization.

    The paper outlines what is meant by a high quality statistical service. It explores factors that contribute to a quality work culture. In particular, it outlines the activities and experiences of the Australian Bureau of Statistics in maintaining a quality culture.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016228
    Description:

    The Current Population Survey is the primary source of labour force data for the United States. Throughout any survey process, it is critical that data quality be ensured. This paper discusses how quality issues are addressed during all steps of the survey process, including the development of the sample frame, sampling operations, sample control, data collection, editing, imputation, estimation and questionnaire development. It also reviews the quality evaluations that are built into the survey process. The paper concludes with a discussion of current research and possible future improvements to the survey.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016230
    Description:

    This publication consists of three papers, each addressing data quality issues associated with a large and complex survey. Two of the case studies involve household surveys of labour force activity and the third focuses on a business survey. The papers each address a data quality topic from a different perspective, but share some interesting common threads.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016231
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In 2000, the Behavioral Risk Factor Surveillance System (BRFSS) conducted monthly telephone surveys in the 50 American states, the District of Columbia, and Puerto Rico: each was responsible for collecting its own survey data. In Maine, data collection was split between the state health department and ORC Macro, a commercial market research firm. Examination of survey outcome rates, selection biases and missing values for income suggests that the Maine health department data are more accurate. However, out of 18 behavioural health risk factors, only four are statistically different by data collector, and for these four factors, the data collected by ORC Macro seem more accurate.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016233
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    From January 2000, the data collection method of the Finnish Consumer Survey was changed from a Labour Force Survey panel design mode to an independent survey. All interviews are now carried out centrally from Statistics Finland's Computer Assisted Telephone Interview (CATI) Centre. There have been suggestions that the new survey mode has been influencing the respondents' answers. This paper analyses the extent of obvious changes in the results of the Finnish Consumer Survey. This is accomplished with the help of a pilot survey. Furthermore, this paper studies the interviewer's role in the data collection process. The analysis is based on cross-tabulations, chi-square tests and multinomial logit models. It shows that the new survey method produces more optimistic estimations and expectations concerning economic matters than the old method did.

    Release date: 2002-09-12
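
A minimal sketch of the kind of cross-tabulation and chi-square test such a mode comparison relies on, using scipy; the answer categories and counts below are hypothetical and do not come from the Finnish Consumer Survey.

```python
# Compare the distribution of an answer category across two collection modes.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: old panel mode, new CATI mode; columns: pessimistic / neutral / optimistic.
table = np.array([[180, 420, 200],
                  [150, 400, 260]])

chi2, p_value, dof, expected = chi2_contingency(table)
print("chi-square = %.2f, df = %d, p = %.4f" % (chi2, dof, p_value))
# A small p-value would indicate that the answer distribution differs by mode,
# consistent with the paper's finding of more optimistic responses under CATI.
```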

  • Articles and reports: 11-522-X20010016235
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Police records collected by the Federal Bureau of Investigation (FBI) through the Uniform Crime Reporting (UCR) Program are the leading source of national crime statistics. Recently, audits to correct UCR records have raised concerns about how to handle the errors discovered in these files. Concerns centre on the methodology used to detect errors and the procedures used to correct errors once they have been discovered. This paper explores these concerns, focusing on sampling methodology, the establishment of a statistical-adjustment factor, and alternative solutions. The paper distinguishes between sample adjustment and sample estimates of an agency's data, and recommends sample adjustment as the most accurate way of dealing with errors.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016236
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Uniform Crime Reporting (UCR) Program has devoted a considerable amount of resources in a continuous effort to improve the quality of its data. In this paper, the authors introduce and discuss the use of cross-ratios and chi-square measures to evaluate the rationality of the data. UCR data are used to illustrate this approach empirically.

    Release date: 2002-09-12
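
One plausible reading of the cross-ratio here is the cross-product (odds) ratio of a 2x2 table of counts; the sketch below computes that ratio and a chi-square statistic for a hypothetical agency-by-year table. The actual screening rules used by the UCR Program are not reproduced.

```python
# Cross-product ratio and chi-square for a hypothetical 2x2 table of offence counts.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts for one agency: offences by type in two consecutive years.
table = np.array([[340, 120],    # year 1: property, violent
                  [360, 115]])   # year 2: property, violent

cross_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
chi2, p_value, _, _ = chi2_contingency(table)

print("cross-ratio = %.3f" % cross_ratio)     # values far from 1 flag unusual shifts
print("chi-square  = %.3f (p = %.3f)" % (chi2, p_value))
```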

  • Articles and reports: 11-522-X20010016237
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Secondary users of health information often assume that administrative data provides a relatively sound basis for making important planning and policy decisions. If errors are evenly or randomly distributed, this assumption may have little impact on these decisions. However, when information sources contain systematic errors, or when systematic errors are introduced during the creation of master files, this assumption can be damaging.

    The most common systematic errors involve underreporting activities for a specific population; inaccurate re-coding of spatial information; and differences in data entry protocols, which have raised questions about the consistency of data submitted by different tracking agencies. The Central East Health Information Partnership (CEHIP) has identified a number of systematic errors in administrative databases and has documented many of these in reports distributed to partner organizations.

    This paper describes how some of these errors were identified and notes the processes that give rise to the loss of data integrity. The conclusion addresses some of the impacts these problems have for health planners, program managers and policy makers.

    Release date: 2002-09-12

  • Articles and reports: 11-522-X20010016238
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    Research programs building on population-based, longitudinal administrative data and record-linkage techniques are found in England, Scotland, the United States (the Mayo Clinic), Western Australia and Canada. These systems can markedly expand both the methodological and the substantive research in health and health care.

    This paper summarizes published Canadian data quality studies regarding registries, hospital discharges, prescription drugs and physician claims. It makes suggestions for improving registries, facilitating record linkage and expanding research into social epidemiology. New trends in case identification and health status measurement using administrative data are also noted, and the differing needs for data quality research in each province are highlighted.

    Release date: 2002-09-12

Reference (15) (0 to 10 of 15 results)

  • Surveys and statistical programs – Documentation: 62F0026M2002002
    Geography: Province or territory
    Description:

    This guide presents information of interest to users of data from the Survey of Household Spending. Data are collected via paper questionnaires and personal interviews conducted in January, February and March after the reference year. Information is gathered about the spending habits, dwelling characteristics and household equipment of Canadian households during the reference year. The survey covers private households in the 10 provinces and the 3 territories. (The territories are surveyed every second year, starting in 2001.) This guide includes definitions of survey terms and variables, as well as descriptions of survey methodology and data quality. There is also a section describing the various statistics that can be created using expenditure data (e.g., budget share, market share and aggregates).

    Release date: 2002-12-11

  • Surveys and statistical programs – Documentation: 75F0002M2002002
    Description:

    This document outlines the structure of the January 2001 Survey of Labour and Income Dynamics (SLID) labour interview, including question wording, possible responses and the flow of questions.

    Release date: 2002-12-04

  • Surveys and statistical programs – Documentation: 75F0002M2002003
    Description:

    This paper presents the questions, possible responses and question flows for the 2001 Survey of Labour and Income Dynamics (SLID) preliminary questionnaire.

    Release date: 2002-12-04

  • Surveys and statistical programs – Documentation: 75F0002M2002004
    Description:

    This document presents the information for the Entry Exit portion of the Survey of Labour and Income Dynamics (SLID) Labour interview.

    Release date: 2002-12-04

  • Surveys and statistical programs – Documentation: 11-522-X20010016225
    Description:

    The European Union Labour Forces Survey (LFS) is based on national surveys that were originally very different. For the past decade, under pressure from increasingly demanding users (particularly with respect to timeliness, comparability and flexibility), the LFS has been subjected to a constant process of quality improvement.

    The following topics are presented in this paper:
    A. the quality improvement process, which comprises screening national survey methods, target structure, legal foundations, quality reports, more accurate and more explicit definitions of components, etc.;
    B. expected or achieved results, which include an ongoing survey producing quarterly results within reasonable time frames, comparable employment and unemployment rates over time and space in more than 25 countries, specific information on current political topics, etc.;
    C. continuing shortcomings, such as implementation delays in certain countries, possibilities of longitudinal analysis, public access to microdata, etc.;
    D. future tasks envisioned, such as adaptation of the list of ISCO and ISCED variables and nomenclatures (to take into account evolution in employment and teaching methods), differential treatment of structural variables and increased recourse to administrative files (to limit respondent burden), harmonization of questionnaires, etc.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 11-522-X20010016229
    Description:

    This paper discusses the approach that Statistics Canada has taken to improve the quality of annual business surveys through their integration in the Unified Enterprise Survey (UES). The primary objective of the UES is to measure the final annual sales of goods and services accurately by province, in sufficient detail and in a timely manner.

    This paper describes the methodological approaches that the UES has used to improve financial and commodity data quality in four broad areas. These include improved coherence of the data collected from different levels of the enterprise, better coverage of industries, better depth of information (in the sense of more content detail and estimates for more detailed domains) and better consistency of the concepts and methods across industries.

    The approach, in achieving quality, has been to (a) establish a base measure of the quality of the business survey program prior to the UES, (b) measure the annual data quality of the UES, and (c) carry out specific studies to better understand the quality of UES data and methods.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 11-522-X20010016234
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    With the goal of obtaining a complete enumeration of the Canadian agricultural sector, the 2001 Census of Agriculture has been conducted using several collection methods. Challenges to the traditional drop-off and mail-back of paper questionnaires in a household-based enumeration have led to the adoption of supplemental methods using newer technologies to maintain the coverage and content of the census. Overall, this mixed-mode data collection process responds to the critical needs of the census programme at various points. This paper examines these data collection methods, several quality assessments, and the future challenges of obtaining a co-ordinated view of the methods' individual approaches to achieving data quality.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 11-522-X20010016269
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    In surveys with low response rates, non-response bias can be a major concern. While it is not always possible to measure the actual bias due to non-response, there are different approaches that help identify potential sources of non-response bias. In the National Center for Education Statistics (NCES), surveys with a response rate lower than 70% must conduct a non-response bias analysis. This paper discusses the different approaches to non-response bias analyses using examples from NCES.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 11-522-X20010016293
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    This paper presents the Second Summit of the Americas Regional Education Indicators Project (PRIE), whose basic goal is to develop a set of comparable indicators for the Americas. This project is led by the Ministry of Education of Chile and has been developed in response to the countries' need to improve their information systems and statistics. The countries need to construct reliable and relevant indicators to support decisions in education, both within their individual countries and in the region as a whole. The first part of the paper analyses the importance of statistics and indicators in supporting educational policies and programs, and describes the present state of the information and statistics systems in these countries. It also discusses the major problems faced by the countries and reviews their experiences in participating in other education indicator projects or programs, such as the INES Program, the WEI Project, MERCOSUR and CREMIS. The second part of the paper examines PRIE's technical co-operation program, its purpose and its implementation. It also emphasizes how technical co-operation responds to the needs of the countries and supports them in filling the gaps in available and reliable data.

    Release date: 2002-09-12

  • Surveys and statistical programs – Documentation: 11-522-X20010016308
    Description:

    This paper discusses in detail issues dealing with the technical aspects of designing and conducting surveys. It is intended for an audience of survey methodologists.

    The Census Bureau uses response error analysis to evaluate the effectiveness of survey questions. For a given survey, questions that are deemed critical to the survey or considered problematic from past examination are selected for analysis. New or revised questions are prime candidates for re-interview. Re-interview is a new interview where a subset of questions from the original interview are re-asked to a sample of the survey respondents. For each re-interview question, the proportion of respondents who give inconsistent responses is evaluated. The "Index of Inconsistency" is used as the measure of response variance. Each question is labelled low, moderate, or high in response variance. In high response variance cases, the questions are put through cognitive testing, and modifications to the question are recommended.

    The Schools and Staffing Survey (SASS), sponsored by the National Center for Education Statistics (NCES), is also examined for response error and for possible relationships between inconsistent responses and characteristics of the schools and teachers in that survey. Results of this analysis can be used to change survey procedures and improve data quality.

    Release date: 2002-09-12
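
A sketch of one common formulation of the Index of Inconsistency for a binary item: the gross difference rate between original and re-interview responses divided by its expected value given the two marginal proportions. The simulated data and the rule-of-thumb thresholds in the comments are illustrative and are not taken from the standards cited above.

```python
# Index of Inconsistency for a binary re-interview item (simulated, illustrative data).
import numpy as np

rng = np.random.default_rng(3)
n = 500
original    = rng.random(n) < 0.30                      # original responses
# re-interview answers mostly agree but flip with some probability
reinterview = np.where(rng.random(n) < 0.12, ~original, original)

gdr = np.mean(original != reinterview)                  # gross difference rate
p1, p2 = original.mean(), reinterview.mean()
index_of_inconsistency = gdr / (p1 * (1 - p2) + p2 * (1 - p1))

print("gross difference rate:  %.3f" % gdr)
print("index of inconsistency: %.3f" % index_of_inconsistency)
# A common rule of thumb reads values below about 0.2 as low response variance,
# 0.2-0.5 as moderate, and above 0.5 as high.
```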