Keyword search
Subject
- Selected: Statistical methods (21)
- Administrative data (5)
- Collection and questionnaires (1)
- Data analysis (2)
- Frames and coverage (1)
- Inference and foundations (1)
- Quality assurance (3)
- Response and nonresponse (1)
- Statistical techniques (2)
- Survey design (2)
- Time series (1)
- Weighting and estimation (8)
- Other content related to Statistical methods (1)
Results
All (21) (1 to 10 of 21 results)
- 1. Unemployment: A tale of two sources (Archived). Articles and reports: 75-001-X19890042288. Geography: Canada. Description:
Unemployment estimates from the Labour Force Survey, the source of the official unemployment rate, differ considerably from counts of Unemployment Insurance beneficiaries. This piece reviews the conceptual differences between the two data sources and quantifies many of the factors behind the discrepancies.
Release date: 1989-12-20
- 2. Articles and reports: 12-001-X198900214562. Description:
This paper presents a technique for developing appropriate confidence intervals around postcensal population estimates using a modification of the ratio-correlation method termed the rank-order procedure. It is shown that the Wilcoxon test can be used to decide whether a given ratio-correlation model is stable over time. If stability is indicated, the confidence intervals associated with the data used in model construction are appropriate for postcensal estimates. If stability is not indicated, those confidence intervals are not appropriate and are likely to overstate the precision of postcensal estimates. Given instability, it is shown that confidence intervals appropriate for postcensal estimates can be derived using the rank-order procedure. An empirical example is provided using county population estimates for Washington State. (A rough code sketch of this kind of stability check appears after the results list.)
Release date: 1989-12-15
- 3. Articles and reports: 12-001-X198900214563. Description:
This paper examines the adequacy of estimates of emigrants from Canada and of interprovincial migration data derived from the Family Allowance files and Revenue Canada tax files. The use of these data files in estimating the total population of Canada, the provinces and the territories was evaluated with reference to the 1986 Census counts. It was found that these two administrative files provided consistent and reasonably accurate series of data on emigration and interprovincial migration from 1981 to 1986. Consequently, the population estimates were fairly accurate. The estimate of emigrants derived from the Family Allowance file could be improved by using the ratio of adult to child emigrant rates computed from Employment and Immigration Canada’s immigration file.
Release date: 1989-12-15
- 4. Updating size measures in a probabilities proportional to size without replacement (PPSWOR) design (Archived). Articles and reports: 12-001-X198900214564. Description:
It is sometimes required that a probabilities proportional to size without replacement (PPSWOR) sample of first-stage units (PSUs) in a multistage population survey design be updated to take account of new size measures that have become available for the whole population of such units. However, because of the considerable investment in within-PSU mapping, segmentation, listing, enumerator recruitment and so on, we would like to retain the same sample PSUs where possible, consistent with the requirement that selection probabilities may now be regarded as proportional to the new size measures. The method described in this article differs from methods already described in the literature in that it is valid for any sample size and does not require enumeration of all possible samples. Further, it does not require that the old and new sampling methods be the same, so it provides a convenient way not only of updating size measures but also of switching to a new sampling method. (A rough code sketch of the updating problem, using a classical approach rather than this article's method, appears after the results list.)
Release date: 1989-12-15
- 5. Articles and reports: 12-001-X198900214565. Description:
Empirical Bayes techniques are applied to the problem of “small area” estimation of proportions. Such methods have previously been used to advantage in a variety of situations, as described, for example, by Morris (1983). The basic idea consists of incorporating random effects and nested random effects into models that reflect the complex structure of a multi-stage sample design, as originally proposed by Dempster and Tomberlin (1980). Estimates of proportions can be obtained, together with associated estimates of uncertainty. These techniques are applied to simulated data in a Monte Carlo study that compares several available techniques for small area estimation. (A rough code sketch of empirical Bayes shrinkage for small-area proportions appears after the results list.)
Release date: 1989-12-15
- 6. Articles and reports: 12-001-X198900214566. Description:
A randomized response model for sampling from dichotomous populations is developed in this paper. The model permits the use of continuous randomization and multiple trials per respondent. The special case of randomization with normal distributions is considered, and a computer simulation of such a sampling procedure is presented as an initial exploration of the effects such a scheme has on the amount of information in the sample. A portable electronic device that would implement the model is discussed. The results of a study conducted using the electronic randomizing device are presented; they show that randomized response sampling is superior to direct questioning for at least some sensitive questions. (A rough code sketch of a normal-noise randomized response scheme appears after the results list.)
Release date: 1989-12-15
- 7. Logistic regression under complex survey designs (Archived). Articles and reports: 12-001-X198900214567. Description:
Estimation procedures for obtaining consistent estimators of the parameters of a generalized logistic function, and of its asymptotic covariance matrix, under complex survey designs are presented. A correction in the Taylor estimator of the covariance matrix is made to produce a positive definite covariance matrix; the correction also reduces the small-sample bias. The estimation procedure is first presented for cluster sampling and then extended to more complex situations. A Monte Carlo study is conducted to examine the small-sample properties of F-tests constructed from alternative covariance matrices. The maximum likelihood method, in which the survey design is completely ignored, is compared with the usual Taylor series expansion method and with the modified Taylor procedure. (A rough code sketch of cluster-robust logistic regression appears after the results list.)
Release date: 1989-12-15
- 8. Articles and reports: 12-001-X198900214568. Description:
The paper describes a Monte Carlo study of simultaneous confidence interval procedures for k > 2 proportions under a model of two-stage cluster sampling. The procedures investigated include: (i) standard multinomial intervals; (ii) Scheffé intervals based on sample estimates of the variances of cell proportions; (iii) Quesenberry-Hurst intervals adapted for clustered data using Rao and Scott’s first- and second-order adjustments to X²; (iv) simple Bonferroni intervals; (v) Bonferroni intervals based on transformations of the estimated proportions; (vi) Bonferroni intervals computed using the critical points of Student’s t. In several realistic situations, actual coverage rates of the multinomial procedures were found to be seriously depressed compared with the nominal rate. The best-performing intervals, from the point of view of coverage rates and coverage symmetry (an extension of an idea due to Jennings), were the t-based Bonferroni intervals derived using log and logit transformations. Of the Scheffé-like procedures, the best performance was provided by Quesenberry-Hurst intervals in combination with first-order Rao-Scott adjustments. (A rough code sketch of t-based Bonferroni intervals on the logit scale appears after the results list.)
Release date: 1989-12-15
- 9. Analysis of sample survey data involving categorical response variables: Methods and software (Archived). Articles and reports: 12-001-X198900214569. Description:
During the past 10 years or so, rapid progress has been made in the development of statistical methods for analysing survey data that take account of the complexity of the survey design. This progress has been particularly evident in the analysis of cross-classified count data. Developments in this area have included weighted least squares estimation of generalized linear models and associated Wald tests of goodness of fit and subhypotheses, corrections to standard chi-squared or likelihood ratio tests under loglinear models or logistic regression models involving a binary response variable, and jackknifed chi-squared tests. This paper illustrates the use of various extensions of these methods on data from complex surveys. The method of Scott, Rao and Thomas (1989) for weighted regression involving singular covariance matrices is applied to data from the Canada Health Survey (1978-79). Methods for logistic regression models are extended to Box-Cox models involving power transformations of cell odds ratios, and their use is illustrated on data from the Canadian Labour Force Survey. Methods for testing equality of parameters in two logistic regression models, corresponding to two time points, are applied to data from the Canadian Labour Force Survey. Finally, a general class of polytomous response models is studied, and corrected chi-squared tests are applied to data from the Canada Health Survey (1978-79). Software implementing these methods using SAS facilities on a mainframe computer is briefly described. (A rough code sketch of a first-order Rao-Scott chi-squared correction appears after the results list.)
Release date: 1989-12-15
- 10. Job ads: A leading indicator? (Archived). Articles and reports: 75-001-X19890032282. Geography: Canada. Description:
The Help-wanted Index measures job ads as an indicator of labour demand. The index is considered a leading indicator of labour market conditions and of general economic activity. This study looks at the performance of the index during the last three business cycles.
Release date: 1989-09-30
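The sketches below are simplified Python illustrations of some of the techniques described in the results above. The data, parameter values and helper names are hypothetical, and none of the code reproduces the listed authors' actual procedures.

For item 2, a minimal version of the stability idea is to pair county-level ratios from two periods and apply the Wilcoxon signed-rank test; a non-significant result is taken as evidence that the ratio-correlation relationship has not shifted. The county ratios below are synthetic.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_counties = 39                      # Washington State has 39 counties; the ratios are synthetic

# Hypothetical county-level ratios (symptomatic-indicator share / population share)
# observed in two successive intercensal periods.
ratios_period1 = rng.normal(1.00, 0.05, n_counties)
ratios_period2 = ratios_period1 + rng.normal(0.00, 0.02, n_counties)

# If the paired ratios show no systematic shift, the model is treated as "stable".
stat, p_value = wilcoxon(ratios_period1, ratios_period2)
stable = p_value > 0.05
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}, stable = {stable}")
```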
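For item 4, the updating problem can be illustrated with a classical Keyfitz-style approach, which is not the method of the article: with one PSU selected per stratum, the current PSU is retained with probability min(1, p_new / p_old), and if it is dropped, a replacement is drawn from the PSUs whose selection probability increased, in proportion to the increase. This keeps the unconditional selection probabilities equal to the new size measures while maximizing retention. All probabilities below are hypothetical.

```python
import numpy as np

def keyfitz_update(selected, p_old, p_new, rng):
    """Keyfitz-style update for a one-PSU-per-stratum PPS design.

    Retains the currently selected PSU with probability min(1, p_new/p_old);
    if it is dropped, reselects among PSUs whose probability increased,
    proportionally to the increase.
    """
    keep_prob = min(1.0, p_new[selected] / p_old[selected])
    if rng.random() < keep_prob:
        return selected
    gains = np.clip(p_new - p_old, 0.0, None)   # probability increases only
    gains[selected] = 0.0
    return int(rng.choice(len(p_new), p=gains / gains.sum()))

rng = np.random.default_rng(1)
p_old = np.array([0.40, 0.35, 0.25])   # hypothetical old selection probabilities
p_new = np.array([0.30, 0.45, 0.25])   # hypothetical probabilities under the new size measures
print(keyfitz_update(selected=0, p_old=p_old, p_new=p_new, rng=rng))
```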
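For item 5, a bare-bones empirical Bayes sketch (much simpler than the nested random-effects models in the paper) shrinks raw small-area proportions toward the overall mean under a beta-binomial model, with the prior fitted by a rough method of moments. The counts and sample sizes are made up.

```python
import numpy as np

def eb_proportions(y, n):
    """Empirical Bayes shrinkage of small-area proportions under a beta-binomial
    model, with a rough method-of-moments prior fit that assumes the area
    sample sizes are roughly comparable."""
    p_hat = y / n
    m = p_hat.mean()                            # overall (prior) mean
    n_bar = n.mean()
    s2 = p_hat.var(ddof=1)                      # between-area variance of the raw rates
    tau2 = max((s2 - m * (1 - m) / n_bar) / (1 - 1 / n_bar), 1e-8)
    M = max(m * (1 - m) / tau2 - 1, 1e-8)       # prior "sample size" alpha + beta
    shrink = n / (n + M)                        # per-area shrinkage weight
    return shrink * p_hat + (1 - shrink) * m

y = np.array([ 2,  5, 12,  0,  7])              # hypothetical area counts
n = np.array([40, 55, 60, 35, 50])              # hypothetical area sample sizes
print(np.round(eb_proportions(y, n), 3))
```

Areas with small samples are pulled more strongly toward the overall mean, which is the basic effect the paper exploits.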
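For item 6, one simple continuous-randomization scheme, assumed here purely for illustration, has each respondent add zero-mean normal noise from the device to a 0/1 sensitive answer. The sample mean of the reports remains an unbiased estimator of the proportion, at the cost of extra variance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each respondent holds a sensitive 0/1 answer and the randomizing
# device adds zero-mean normal noise before the value is reported to the interviewer.
n, true_p, sigma = 2000, 0.15, 0.5
answers = rng.random(n) < true_p                 # unobserved true answers
reports = answers + rng.normal(0.0, sigma, n)    # what the interviewer actually records

p_hat = reports.mean()                           # unbiased because the noise has mean zero
se_hat = reports.std(ddof=1) / np.sqrt(n)        # sampling error inflated by the noise
print(f"estimated proportion {p_hat:.3f} (se {se_hat:.3f}), true value {true_p}")
```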
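For item 7, a common practical counterpart to Taylor-linearized variance estimation under cluster sampling is a logistic fit with a cluster-robust (sandwich) covariance grouped by PSU. The sketch below uses statsmodels on simulated clustered data; survey weights and the article's positive-definiteness correction are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical clustered sample: 30 PSUs with 25 respondents each and a shared PSU
# effect, so observations within a PSU are positively correlated.
n_psu, m = 30, 25
psu = np.repeat(np.arange(n_psu), m)
u = rng.normal(0.0, 0.5, n_psu)[psu]
x = rng.normal(size=n_psu * m)
y = (rng.random(n_psu * m) < 1.0 / (1.0 + np.exp(-(-0.5 + 0.8 * x + u)))).astype(int)

X = sm.add_constant(x)                           # columns: intercept, x
fit = sm.Logit(y, X).fit(disp=0, cov_type="cluster", cov_kwds={"groups": psu})
print("coefficients:", fit.params)
print("cluster-robust standard errors:", fit.bse)
```

The cluster-robust standard errors are typically larger than those from a naive fit that ignores the design, which is the point of the comparison in the article.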
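For item 8, a simplified version of the study's best-performing procedure, t-based Bonferroni intervals on the logit scale, can be computed from cluster-level proportions, with variances estimated from between-cluster variation. Equal-sized clusters are assumed here and the cluster counts are simulated.

```python
import numpy as np
from scipy import stats

def bonferroni_logit_intervals(counts, alpha=0.05):
    """Simultaneous t-based Bonferroni intervals on the logit scale for k category
    proportions estimated from C equal-sized clusters. `counts` is a (C, k) array
    of category counts per cluster; a with-replacement cluster variance is used."""
    counts = np.asarray(counts, dtype=float)
    C, k = counts.shape
    cluster_p = counts / counts.sum(axis=1, keepdims=True)   # per-cluster proportions
    p = cluster_p.mean(axis=0)                               # overall estimates
    var_p = cluster_p.var(axis=0, ddof=1) / C                # variance of the cluster mean
    t_crit = stats.t.ppf(1 - alpha / (2 * k), df=C - 1)      # Bonferroni-adjusted critical point
    logit = np.log(p / (1 - p))
    se_logit = np.sqrt(var_p) / (p * (1 - p))                # delta-method standard error
    lo, hi = logit - t_crit * se_logit, logit + t_crit * se_logit
    return 1 / (1 + np.exp(-lo)), p, 1 / (1 + np.exp(-hi))   # back-transform to proportions

rng = np.random.default_rng(4)
counts = rng.multinomial(40, [0.5, 0.3, 0.2], size=12)       # 12 hypothetical clusters, k = 3
low, point, high = bonferroni_logit_intervals(counts)
for j in range(3):
    print(f"category {j}: {point[j]:.3f} ({low[j]:.3f}, {high[j]:.3f})")
```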
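For item 9, the flavour of a first-order Rao-Scott correction can be shown by dividing Pearson's X² by an average estimated design effect and referring the result to a chi-squared distribution. The full procedure uses the eigenvalues of the estimated design-effect matrix; all numbers below are hypothetical.

```python
import numpy as np
from scipy import stats

def rao_scott_gof(p_hat, p0, n, deff):
    """Simplified first-order Rao-Scott corrected goodness-of-fit test.
    p_hat: estimated cell proportions from the survey,
    p0:    hypothesised cell proportions,
    n:     nominal sample size,
    deff:  estimated design effects of the cell proportion estimates.
    Divides Pearson's X^2 by the average design effect and refers the
    result to chi-squared with k - 1 degrees of freedom."""
    p_hat, p0, deff = map(np.asarray, (p_hat, p0, deff))
    x2 = n * np.sum((p_hat - p0) ** 2 / p0)      # Pearson statistic from proportions
    x2_rs = x2 / deff.mean()                      # first-order (average deff) correction
    df = len(p0) - 1
    return x2_rs, stats.chi2.sf(x2_rs, df)

# Hypothetical numbers: four response categories from a clustered design with deff ~ 1.5-2.
stat, p_value = rao_scott_gof(p_hat=[0.28, 0.26, 0.24, 0.22],
                              p0=[0.25, 0.25, 0.25, 0.25],
                              n=1200,
                              deff=[1.6, 1.8, 1.5, 1.9])
print(f"Rao-Scott adjusted X^2 = {stat:.2f}, p = {p_value:.3f}")
```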