Subject
- Selected: Statistical methods (69)
- Administrative data (2)
- Collection and questionnaires (6)
- Data analysis (1)
- Disclosure control and data dissemination (1)
- Editing and imputation (2)
- Frames and coverage (1)
- History and context (4)
- Inference and foundations (6)
- Quality assurance (26)
- Response and nonresponse (2)
- Statistical techniques (3)
- Survey design (3)
- Time series (4)
- Weighting and estimation (21)
- Other content related to Statistical methods (2)
Results
All (69) (30 to 40 of 69 results)
- Articles and reports: 11-522-X19990015646. Geography: Canada. Description:
The current economic context obliges all partners of health-care systems, whether public or private, to identify the factors that determine the use of health-care services. To increase our understanding of the phenomena that underlie these relationships, Statistics Canada and the Manitoba Centre for Health Policy and Evaluation have established a new database. For a representative sample of the province of Manitoba, cross-sectional micro-data on the level of health of individuals and on their socioeconomic characteristics have been linked with detailed longitudinal data on the use of health-care services. In this presentation, we will discuss the general context of linking records from various organizations and the protection of privacy and confidentiality. We will also present results of studies that could not have been performed in the absence of the linked database.
Release date: 2000-03-02
- Surveys and statistical programs – Documentation: 11-522-X19990015648. Description:
We estimate the parameters of a stochastic model for labour force careers involving distributions of correlated durations employed, unemployed (with and without job search) and not in the labour force. If the model is to account for sub-annual labour force patterns as well as advancement towards retirement, then no single data source is adequate to inform it. However, it is possible to build up an approximation from a number of different sources.
Release date: 2000-03-02
- Surveys and statistical programs – Documentation: 11-522-X19990015650. Description:
The U.S. Manufacturing Plant Ownership Change Database (OCD) was constructed using plant-level data taken from the Census Bureau's Longitudinal Research Database (LRD). It contains data on all manufacturing plants that have experienced ownership change at least once during the period 1963-92. This paper reports on the status of the OCD and discusses its research possibilities. For an empirical demonstration, data taken from the database are used to study the effects of ownership changes on plant closure.
Release date: 2000-03-02
- 34. Creation of an occupational surveillance system in Canada: Combining data for a unique Canadian study (Archived). Surveys and statistical programs – Documentation: 11-522-X19990015652. Description:
Objective: To create an occupational surveillance system by collecting, linking, evaluating and disseminating data relating to occupation and mortality with the ultimate aim of reducing or preventing excess risk among workers and the general population.
Release date: 2000-03-02
- Articles and reports: 11-522-X19990015654. Description:
A meta-analysis was performed to estimate the proportion of liver carcinogens, the proportion of chemicals carcinogenic at any site, and the corresponding proportion of anticarcinogens among chemicals tested in 397 long-term cancer bioassays conducted by the U.S. National Toxicology Program. Although the estimator used was negatively biased, the study provided persuasive evidence for a larger proportion of liver carcinogens (0.43; 90% CI: 0.35 to 0.51) than was identified by the NTP (0.28). A larger proportion of chemicals carcinogenic at any site was also estimated (0.59; 90% CI: 0.49 to 0.69) than was identified by the NTP (0.51), although this excess was not statistically significant. A larger proportion of anticarcinogens (0.66) was estimated than carcinogens (0.59). Despite the negative bias, it was estimated that 85% of the chemicals were either carcinogenic or anticarcinogenic at some site in some sex-species group. This suggests that most chemicals tested at high enough doses will cause some sort of perturbation in tumor rates.
Release date: 2000-03-02
- 36. Particulate matter and daily mortality: Combining time series information from eight U.S. cities (Archived). Surveys and statistical programs – Documentation: 11-522-X19990015656. Description:
Time series studies have shown associations between air pollution concentrations and morbidity and mortality. These studies have largely been conducted within single cities, and with varying methods. Critics of these studies have questioned the validity of the data sets used and the statistical techniques applied to them; the critics have noted inconsistencies in findings among studies and even in independent re-analyses of data from the same city. In this paper we review some of the statistical methods used to analyze a subset of a national data base of air pollution, mortality and weather assembled during the National Morbidity and Mortality Air Pollution Study (NMMAPS).
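As a sketch of the kind of multi-city combination described above, city-specific estimates can be pooled with fixed-effect inverse-variance weights. The city labels, estimates, and standard errors below are invented for illustration and are not NMMAPS results.

```python
import math

# Hypothetical city-specific estimates of the percent increase in daily
# mortality per 10 ug/m^3 of PM10, with standard errors (all invented).
estimates = {
    "CityA": (0.6, 0.25), "CityB": (0.4, 0.30), "CityC": (0.9, 0.40),
    "CityD": (0.3, 0.20), "CityE": (0.7, 0.35), "CityF": (0.5, 0.28),
    "CityG": (0.2, 0.45), "CityH": (0.8, 0.33),
}

def pool(results):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    weights = {city: 1.0 / se ** 2 for city, (_, se) in results.items()}
    total = sum(weights.values())
    pooled = sum(w * results[city][0] for city, w in weights.items()) / total
    return pooled, math.sqrt(1.0 / total)

pooled, se = pool(estimates)
# The pooled standard error is smaller than that of any single city,
# which is the motivation for combining information across cities.
```

This is the simplest fixed-effect combination; a full analysis of this kind would also model between-city heterogeneity.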
Release date: 2000-03-02
- Surveys and statistical programs – Documentation: 11-522-X19990015658. Description:
Radon, a naturally occurring gas found at some level in most homes, is an established risk factor for human lung cancer. The U.S. National Research Council (1999) has recently completed a comprehensive evaluation of the health risks of residential exposure to radon, and developed models for projecting radon lung cancer risks in the general population. This analysis suggests that radon may play a role in the etiology of 10-15% of all lung cancer cases in the United States, although these estimates are subject to considerable uncertainty. In this article, we present a partial analysis of uncertainty and variability in estimates of lung cancer risk due to residential exposure to radon in the United States using a general framework for the analysis of uncertainty and variability that we have developed previously. Specifically, we focus on estimates of the age-specific excess relative risk (ERR) and lifetime relative risk (LRR), both of which vary substantially among individuals.
Release date: 2000-03-02
- 38. Overview of record linkage (Archived). Surveys and statistical programs – Documentation: 11-522-X19990015660. Description:
There are many different situations in which one or more files need to be linked. With a single file, the purpose of the linkage is to locate duplicates within the file. When there are two files, the linkage is done to identify the units that are the same on both files and thus create matched pairs. Often the records that need to be linked do not have a unique identifier. Hierarchical record linkage, probabilistic record linkage and statistical matching are three methods that can be used when there is no unique identifier on the files to be linked. We describe the major differences between the methods. We consider how to choose variables to link, how to prepare files for linkage and how the links are identified. As well, we review tips and tricks used when linking files. Two examples, the probabilistic record linkage used in the reverse record check and the hierarchical record linkage of the Business Number (BN) master file to the Statistical Universe File (SUF) of unincorporated tax filers (T1), will be illustrated.
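As an illustration of the probabilistic approach mentioned above, a minimal Fellegi-Sunter-style scorer sums log-likelihood-ratio weights over comparison fields. The field names and the m- and u-probabilities below are invented for the sketch and are not those of any Statistics Canada application.

```python
import math

# Illustrative m-probabilities (agreement given a true match) and
# u-probabilities (agreement given a non-match) per field; all invented.
FIELDS = {
    "surname":    (0.95, 0.01),
    "birth_year": (0.90, 0.05),
    "postcode":   (0.85, 0.02),
}

def match_weight(rec_a, rec_b):
    """Sum log2(m/u) for agreeing fields and log2((1-m)/(1-u)) for disagreeing ones."""
    total = 0.0
    for field, (m, u) in FIELDS.items():
        if rec_a[field] == rec_b[field]:
            total += math.log2(m / u)
        else:
            total += math.log2((1 - m) / (1 - u))
    return total

a = {"surname": "Tremblay", "birth_year": 1952, "postcode": "R3B"}
b = {"surname": "Tremblay", "birth_year": 1952, "postcode": "R2C"}
score = match_weight(a, b)  # two agreements outweigh one disagreement
```

Pairs scoring above an upper threshold are declared links, below a lower threshold non-links, and the band in between is sent for clerical review.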
Release date: 2000-03-02
- 39. A comparison of two record linkage procedures (Archived). Surveys and statistical programs – Documentation: 11-522-X19990015664. Description:
Much work on probabilistic methods of linkage can be found in the statistical literature. However, although many groups undoubtedly still use deterministic procedures, little literature is available on these strategies, and there appears to be no documented comparison of results for the two strategies. Such a comparison is pertinent when only non-unique identifiers, such as name, sex and race, are available as the common identifiers on which the databases are to be linked. In this work we compare a stepwise deterministic linkage strategy with the probabilistic strategy, as implemented in AUTOMATCH, for such a situation. The comparison was carried out on a linkage between medical records from the Regional Perinatal Intensive Care Centers database and education records from the Florida Department of Education. Social security numbers, available in both databases, were used to determine the true status of each record pair after matching. Match rates and error rates for the two strategies are compared, and their similarities and differences, strengths and weaknesses are discussed.
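A stepwise deterministic strategy of the kind compared here can be sketched as follows, with match and error rates evaluated against a gold-standard key (the role the social security numbers play in this study). All records and the two-step rule are invented for the sketch.

```python
# Toy records; "id" is a gold-standard key used only to evaluate the
# linkage, never to perform it. All data are invented.
file_a = [
    {"id": 1, "name": "smith",  "sex": "F", "yob": 1980},
    {"id": 2, "name": "jones",  "sex": "M", "yob": 1975},
    {"id": 3, "name": "nguyen", "sex": "F", "yob": 1990},
    {"id": 4, "name": "smith",  "sex": "M", "yob": 1985},
]
file_b = [
    {"id": 1, "name": "smith",  "sex": "F", "yob": 1980},
    {"id": 2, "name": "jones",  "sex": "M", "yob": 1976},  # yob recorded with an error
    {"id": 3, "name": "nguyen", "sex": "F", "yob": 1990},
    {"id": 5, "name": "smith",  "sex": "M", "yob": 1985},  # a different person
]

def stepwise_link(a_recs, b_recs):
    """Step 1: exact match on (name, sex, yob); step 2: relax to (name, sex)."""
    links, used = [], set()
    for keys in (("name", "sex", "yob"), ("name", "sex")):
        index = {}
        for b in b_recs:
            index.setdefault(tuple(b[k] for k in keys), []).append(b)
        for a in a_recs:
            if a["id"] in used:
                continue
            cands = index.get(tuple(a[k] for k in keys), [])
            if len(cands) == 1:  # accept only unambiguous candidates
                links.append((a, cands[0]))
                used.add(a["id"])
    return links

links = stepwise_link(file_a, file_b)
true_links = sum(a["id"] == b["id"] for a, b in links)
match_rate = true_links / len(file_a)                 # true pairs recovered
false_rate = (len(links) - true_links) / len(links)   # accepted links that are wrong
```

The relaxed second step recovers the pair broken by the year-of-birth error, but the deterministic rule also accepts a false link between two different people who agree on name and sex, which is exactly the trade-off the match-rate/error-rate comparison quantifies.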
Release date: 2000-03-02
- 40. An evaluation of data fusion techniques (Archived). Surveys and statistical programs – Documentation: 11-522-X19990015666. Description:
The fusion sample obtained by a statistical matching process can be considered a sample from an artificial population. The distribution of this artificial population is derived. If the correlation between specific variables is the only focus, the strong requirement of conditional independence can be weakened. In a simulation study, the effects of violating some of the assumptions leading to the distribution of the artificial population are examined. Finally, some ideas concerning the establishment of the claimed conditional independence by latent class analysis are presented.
Release date: 2000-03-02
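The conditional-independence issue discussed in the abstract above can be illustrated by simulation: when the donor and recipient variables share more than the matching variable, the fused sample understates their true correlation. All distributions below are invented for the sketch, which uses only the standard library.

```python
import random
import statistics

random.seed(42)

def corr(u, v):
    """Pearson correlation of two equal-length lists."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    cov = statistics.fmean([(a - mu) * (b - mv) for a, b in zip(u, v)])
    return cov / (statistics.pstdev(u) * statistics.pstdev(v))

# Y and Z share both the matching variable X and a common shock W,
# so they are NOT conditionally independent given X.
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]
W = [random.gauss(0, 1) for _ in range(n)]
Y = [x + w + random.gauss(0, 1) for x, w in zip(X, W)]
Z = [x + w + random.gauss(0, 1) for x, w in zip(X, W)]
true_yz = corr(Y, Z)  # theoretical value 2/3

# Split into a recipient file (X, Y) and a donor file (X, Z), then fuse
# by rank-matching on X, a crude nearest-neighbour statistical match.
recip = sorted(zip(X[: n // 2], Y[: n // 2]))
donor = sorted(zip(X[n // 2 :], Z[n // 2 :]))
fused_yz = corr([y for _, y in recip], [z for _, z in donor])
# Fusion transmits only the association flowing through X, so the fused
# correlation collapses toward Var(X)/Var(Y) = 1/3.
```

This is the sense in which the fusion sample comes from an "artificial population": its (Y, Z) association is whatever conditional independence given X implies, regardless of the true joint distribution.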
Data (0) (0 results): No content available at this time.
Analysis (38) (0 to 10 of 38 results)
- Articles and reports: 82-003-X20000015300. Geography: Canada. Description:
This article examines the extent of proxy reporting in the National Population Health Survey (NPHS). It also explores associations between proxy reporting status and the prevalence of selected health problems, and investigates the relationship between changes in proxy reporting status and the two-year incidence of health problems.
Release date: 2000-10-20
- Articles and reports: 88-003-X20000035768. Geography: Canada. Description:
Why do innovation surveys produce radically different estimates of the number of R&D performers than R&D surveys do? The factors contributing to this divergence are presented, with detail on selected contributors.
Release date: 2000-10-06
- Articles and reports: 75F0002M2000006. Description:
This paper discusses methods and tools considered and used to produce cross-sectional estimates based on the combination of two longitudinal panels for the Survey of Labour and Income Dynamics (SLID).
Release date: 2000-10-05
- Articles and reports: 62F0026M2000004. Description:
The Survey of Household Spending (SHS), which replaced the periodic Family Expenditure Survey (FAMEX) in 1997, is an annual survey that collects detailed expenditure information from households for a given calendar year. Because of the heavy response burden this survey places on respondents, it was decided for the 1997 survey to test the effect of incentives on response rates. Two incentives were used: a one-year subscription to the Statistics Canada publication Canadian Social Trends and a telephone calling card. The response rate data were analysed using Fisher's exact test and some non-parametric methods. After controlling for a discovered interviewer assignment effect, some evidence of a calling-card effect was found in the westernmost and easternmost regions of Canada, while there was no evidence of any effect for the magazine. These findings were somewhat corroborated by a separate study testing the effects of incentives on respondent relations. All of these results are discussed in this paper.
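As a side note on the method, Fisher's exact test for a 2x2 incentive-by-response table can be computed with the standard library alone. The counts below are invented for illustration and are not the survey's actual data.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    def p(x):  # hypergeometric P(first cell = x) with the margins fixed
        return comb(r1, x) * comb(r2, c1 - x) / denom
    p_obs = p(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # two-sided: sum over all tables no more probable than the observed one
    return sum(p(x) for x in range(lo, hi + 1) if p(x) <= p_obs * (1 + 1e-9))

# Hypothetical counts: responded / did not respond, by incentive group.
#                 responded  not
# calling card        3       1
# no incentive        1       3
p_value = fisher_exact_2x2(3, 1, 1, 3)  # 34/70, approximately 0.486
```

The exact test is natural here because the cell counts in such a design can be small, where a chi-squared approximation would be unreliable.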
Release date: 2000-08-31
- Articles and reports: 75F0002M2000004. Description:
This paper describes the methodology for the longitudinal and cross-sectional weights produced by the Survey of Labour and Income Dynamics (SLID). It also presents problems the survey has encountered and proposed solutions.
Release date: 2000-08-31
- 6. In this issue (Vol. 26, no. 1) (Archived). Articles and reports: 12-001-X200000110774. Description:
In this Issue is a column in which the Editor briefly presents each paper of the current issue of Survey Methodology. It also sometimes contains information on structural or management changes at the journal.
Release date: 2000-08-30
- Articles and reports: 12-001-X20000015173. Description:
In recognition of Survey Methodology's silver anniversary, this paper reviews the major advances in survey research that have taken place in the past 25 years. It provides a general overview of developments in: the survey research profession; survey methodology - questionnaire design, data collection methods, handling missing data, survey sampling, and total survey error; and survey applications - panel surveys, international surveys, and secondary analysis. It also attempts to forecast some future developments in these areas.
Release date: 2000-08-30
- 8. Survey sampling theory over the twentieth century and its relation to computing technology (Archived). Articles and reports: 12-001-X20000015174. Description:
Computation is an integral part of statistical analysis in general and survey sampling in particular. What kinds of analyses can be carried out will depend upon what kind of computational power is available. The general development of sampling theory is traced in connection with technological developments in computation.
Release date: 2000-08-30
- 9. The past is prologue (Archived). Articles and reports: 12-001-X20000015175. Description:
Mahalanobis provided an example of how to use statistics to enlighten and inform government policy makers. His pioneering work was used by the US Bureau of the Census to learn more about measurement errors in censuses and surveys. People have many misconceptions about censuses, among them who is to be counted and where. Errors in the census do occur, among them errors in coverage. Over the years, the US Bureau of the Census has developed statistical techniques, including sampling in the census, to increase accuracy and reduce response burden.
Release date: 2000-08-30
- 10. Estimation of census adjustment factors (Archived). Articles and reports: 12-001-X20000015176. Description:
A components-of-variance approach and an estimated covariance error structure were used in constructing predictors of adjustment factors for the 1990 Decennial Census. The variability of the estimated covariance matrix is the suspected cause of certain anomalies that appeared in the regression estimation and in the estimated adjustment factors. We investigate alternative prediction methods and propose a procedure that is less influenced by variability in the estimated covariance matrix. The proposed methodology is applied to a data set composed of 336 adjustment factors from the 1990 Post Enumeration Survey.
Release date: 2000-08-30
Reference (31) (30 to 40 of 31 results)
- Surveys and statistical programs – Documentation: 21-601-M1998034. Description:
This paper describes the experiences, the issues and the expectations of the many different players involved in the implementation of document imaging for the Canadian Census of Agriculture.
Release date: 2000-01-13