Results

All (23) (results 21 to 23 of 23 shown)

  • Articles and reports: 12-001-X198800114602
    Description:

    For a given level of precision, Hidiroglou (1986) provided an algorithm for dividing the population into a take-all stratum and a take-some stratum so as to minimize the overall sample size, assuming simple random sampling without replacement in the take-some stratum. Sethi (1963) provided an algorithm for optimum stratification of the population into a number of take-some strata. For the stratification of a highly skewed population, this article presents an iterative algorithm whose objective is to determine stratification boundaries that split the population into a take-all stratum and a number of take-some strata. These boundaries are computed so as to minimize the resulting sample size given a level of relative precision, simple random sampling without replacement from the take-some strata, and a power allocation among the take-some strata. The resulting algorithm is a combination of the procedures of Hidiroglou (1986) and Sethi (1963). (A simplified code sketch of the take-all/take-some split follows this list of results.)

    Release date: 1988-06-15

  • Articles and reports: 12-001-X198800114603
    Description:

    Most surveys have many purposes, and a hierarchy of six levels is proposed here. Yet most theory and textbooks are based on unipurpose theory, in order to avoid the complexity and conflicts of multipurpose designs. Ten areas of conflict between purposes are shown, and problems and solutions are advanced for each. Compromises and joint solutions are fortunately feasible, because most optima are very flat and because most “requirements” for precision are actually very flexible. Stating and facing the many purposes is preferable to the common practice of hiding behind some artificially picked single purpose, and it has also become more feasible with modern computers.

    Release date: 1988-06-15

  • Articles and reports: 12-001-X198800114604
    Description:

    In spite of the comparative ease with which studies of error in foreign trade statistics could be conducted, there are few attempts to quantify their size, origin, distribution, and change over time. Policy makers and trade negotiators have little notion of how uncertain these statistics are in spite of their great detail. This paper takes advantage of a World Trade Database developed by Statistics Canada to examine and quantify discrepancies in existing foreign trade statistics.

    Release date: 1988-06-15
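
The first entry above (12-001-X198800114602) is about splitting a skewed population into a take-all stratum and take-some strata so that a precision target is met with the smallest possible sample. The sketch below is a brute-force illustration of the simplest case only, one take-all and one take-some stratum with SRSWOR and a target coefficient of variation on the estimated total; it is not the iterative Hidiroglou-Sethi algorithm of the article, and the function name and inputs are illustrative assumptions.

    import math
    import numpy as np

    def take_all_take_some(x, cv_target):
        """Choose the take-all / take-some boundary that minimizes total sample
        size for a target CV of the estimated total, assuming SRSWOR in the
        take-some stratum.  Exhaustive search, for illustration only."""
        x = np.sort(np.asarray(x, dtype=float))[::-1]   # largest units first
        N, Y = len(x), x.sum()
        best_n, best_t = N, N                           # a census always meets the target
        for t in range(N):                              # first t units are take-all
            tail = x[t:]
            N_s = len(tail)
            S2 = tail.var(ddof=1) if N_s > 1 else 0.0
            # SRSWOR sample size so that N_s^2 * (1/n - 1/N_s) * S2 <= (cv_target * Y)^2
            n_s = min(N_s, max(1, math.ceil(N_s ** 2 * S2 / ((cv_target * Y) ** 2 + N_s * S2))))
            if t + n_s < best_n:
                best_n, best_t = t + n_s, t
        return best_n, best_t                           # (minimum sample size, take-all count)

    # Example: a highly skewed (lognormal) population with a 5% CV target.
    n, t = take_all_take_some(np.random.lognormal(mean=0.0, sigma=2.0, size=500), 0.05)
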
Stats in brief (0)

No content available at this time.

Articles and reports (23) (results 1 to 10 of 23 shown)

  • Articles and reports: 12-001-X198800214582
    Description:

    A comprehensive bibliography of books, research reports and published papers, dealing with the theory, application and development of randomized response techniques, includes a subject classification.

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214583
    Description:

    This note presents an overview of SQL, highlighting its strengths and weaknesses.

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214584
    Description:

    When we examine postal addresses as they might appear in an administrative file, we discover a complex syntax, a lack of standards, various ambiguities and many errors. Postal addresses therefore represent a real challenge to any computer system using them. PAAS (Postal Address Analysis System) is currently under development at Statistics Canada and aims to replace an aging routine used throughout the Bureau to decode postal addresses. PAAS will provide a means by which computer applications can obtain the address components, the standardized version of these components and the corresponding Address Search Key (ASK). (A toy parsing sketch follows this list of results.)

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214585
    Description:

    The methods used to control the quality of Statistics Canada’s survey processing operations generally involve acceptance sampling by attributes with rectifying inspection, contained within the broader framework of Acceptance Control. Although these methods are recognized as good corrective procedures, they do little in themselves to prevent errors from recurring. As error prevention is of the utmost importance in any quality program, the Quality Control Processing System (QCPS) has been designed with it as one of its primary focuses. Accordingly, the system produces feedback reports and graphs for operators, supervisors and managers involved in the various operations. The system also produces information concerning changes in the inspection environments, enabling methodologists to adjust inspection plans and procedures in accordance with the strategy of Acceptance Control. This paper highlights the main tabulation and estimation features of the QCPS and the manner in which it serves to support the principal quality control programs at Statistics Canada. Major capabilities are discussed from both a methodological and a systems perspective. (A minimal sketch of the underlying acceptance-sampling decision follows this list of results.)

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214586
    Description:

    A generalized implementation of a method for performing automated coding is described. Traditionally, coding has been performed manually by specially trained personnel, but computerized systems have recently appeared which either eliminate or substantially reduce the need for manual coding. Typically, such systems are limited to the applications for which they were originally designed. The system presented here may be used by any application to code English or French text using any classification scheme. (A toy dictionary-coding sketch follows this list of results.)

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214587
    Description:

    The QUID system, which was designed and developed by INSEE (Paris), the Institut National de la Statistique et des Études Économiques (National Statistics and Economic Studies Institute), is an automatic coding system for survey data collected in the form of literal headings expressed in the terminology of the respondent. The system hinges on the use of a very wide knowledge base made up of real phrases coded by experts. This study deals primarily with the preliminary automatic standardization processing of the phrases, and then with the algorithm used to organize the phrase base into an optimized tree pattern. A sorting example is provided by way of illustration. At present, the processing of additional coding variables used to complement the information contained in the phrases presents certain difficulties, and these will be examined in detail. The QUID 2 project, an updated version of the system, will be discussed briefly.

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214588
    Description:

    Suppose that undercount rates in a census have been estimated and that block-level estimates of the undercount have been computed. It may then be desirable to create a new roster of households incorporating the estimated omissions. It is proposed here that such a roster be created by weighting the enumerated households. The household weights are constrained by linear equations representing the desired total counts of persons in each estimation class and the desired total count of households. Weights are then calculated that satisfy the constraints while making the fitted table as close as possible to the raw data. The procedure may be regarded as an extension of the standard “raking” methodology to situations where the constraints do not refer to the margins of a contingency table. Continuous as well as discrete covariates may be used in the adjustment, and it is possible to check directly whether the constraints can be satisfied. Methods are proposed for the use of weighted data for various Census purposes, and for adjustment of covariate information on characteristics of omitted households, such as income, that are not directly considered in undercount estimation. (A brief calibration sketch follows this list of results.)

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214589
    Description:

    The U.S. Bureau of the Census uses dual system estimates (DSEs) for measuring census coverage error. The dual system estimate uses data from the original enumeration and a Post Enumeration Survey. In measuring the accuracy of the DSE, it is important to recognize that the DSE is subject to several components of nonsampling error as well as sampling error. This paper gives models of the total error and of the components of error in dual system estimates. The models relate observed indicators of data quality, such as a matching error rate, to the first two moments of the components of error. The propagation of error in the DSE is studied and its bias and variance are assessed. The methodology is applied to the 1986 Census of Central Los Angeles County in the Census Bureau’s Test of Adjustment Related Operations. The methodology will also be useful for assessing error in the DSE for the 1990 census, as well as in other applications. (The basic form of the DSE is sketched after this list of results.)

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214590
    Description:

    This paper presents results from a study of the causes of census undercount for a hard-to-enumerate, largely Hispanic urban area. A framework for organizing the causes of undercount is offered, and various hypotheses about these causes are tested. The approach is distinctive for its attempt to quantify the sources of undercount and isolate problems of unique importance by controlling for other problems statistically.

    Release date: 1988-12-15

  • Articles and reports: 12-001-X198800214591
    Description:

    To estimate census undercount, a post-enumeration survey (PES) is taken, and an attempt is made to find a matching census record for each individual in the PES; the rate of successful matching provides an estimate of census coverage. Undercount estimation is performed within poststrata defined by geographic, demographic, and housing characteristics, X. Portions of X are missing for some individuals due to survey nonresponse; moreover, a match status Y cannot be determined for all individuals. A procedure is needed for imputing the missing values of X and Y. This paper reviews the imputation methods used in the 1986 Test of Adjustment Related Operations (Schenker 1988) and proposes two alternative model-based methods: (1) a maximum-likelihood contingency-table estimation procedure that ignores the missing-data mechanism; and (2) a new Bayesian contingency-table estimation procedure that does not ignore the missing-data mechanism. The first method is computationally simpler, but the second is preferred on conceptual and scientific grounds. (A generic EM sketch for this missing-data pattern follows this list of results.)

    Release date: 1988-12-15
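
The PAAS entry above (12-001-X198800214584) describes decomposing free-form postal addresses into standardized components. The toy sketch below handles a single common Canadian address shape with a regular expression; the pattern, the function name and the component names are assumptions made for illustration and bear no relation to the actual PAAS interface, which must cope with far messier input.

    import re

    # Toy pattern for one common shape: "123 Main St, Ottawa ON K1A 0B1".
    ADDRESS_RE = re.compile(
        r"(?P<number>\d+)\s+(?P<street>.+?),\s*(?P<city>.+?)\s+"
        r"(?P<province>[A-Z]{2})\s+(?P<postal>[A-Z]\d[A-Z]\s?\d[A-Z]\d)$"
    )

    def parse_address(text):
        """Return standardized components for one simple address shape,
        or None when the address needs fuller analysis."""
        m = ADDRESS_RE.match(text.strip().upper())
        return m.groupdict() if m else None

    parse_address("123 Main St, Ottawa ON K1A 0B1")
    # -> {'number': '123', 'street': 'MAIN ST', 'city': 'OTTAWA',
    #     'province': 'ON', 'postal': 'K1A 0B1'}
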
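
The QCPS entry above (12-001-X198800214585) is built on acceptance sampling by attributes with rectifying inspection. The sketch below shows only the basic single-sampling decision behind such plans, inspect a random sample and screen the whole lot when defects exceed the acceptance number; the function and parameter names are assumptions, and a real plan also produces the feedback statistics the paper describes.

    import random

    def inspect_lot(lot, sample_size, acceptance_number, is_defective):
        """Single-sampling plan by attributes with rectifying inspection:
        accept the lot on the sample evidence, or reject it and screen
        (100% inspect and correct) every unit in the lot."""
        sample = random.sample(lot, min(sample_size, len(lot)))
        defects = sum(is_defective(unit) for unit in sample)
        return "accept" if defects <= acceptance_number else "rectify"

    # Example plan: n = 50, c = 2, applied to a batch of processed records.
    # decision = inspect_lot(batch, sample_size=50, acceptance_number=2,
    #                        is_defective=lambda rec: rec.get("error", False))
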
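
The automated coding entry above (12-001-X198800214586) assigns classification codes to English or French text. The sketch below is the simplest dictionary form of the idea, an exact match on a standardized phrase with fallback to manual coding; the tiny dictionary, the normalization rules and the function names are invented for illustration and are not the system's actual scheme.

    import re

    # Toy phrase dictionary: standardized response text -> classification code.
    # A production dictionary would hold many thousands of entries per language.
    PHRASE_CODES = {
        "registered nurse": "3012",
        "truck driver": "7511",
    }

    def standardize(text):
        """Crude standardization: lower-case, drop punctuation, collapse blanks."""
        return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text.lower())).strip()

    def code_response(text):
        """Return the code for an exact match on the standardized phrase,
        or None to route the record to manual coding."""
        return PHRASE_CODES.get(standardize(text))

    code_response("Registered  Nurse.")   # -> "3012"
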
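
The household-weighting entry above (12-001-X198800214588) adjusts household weights to satisfy linear constraints while staying as close as possible to the raw data. The sketch below uses the chi-square (linear calibration) distance, which has a closed-form solution, rather than the raking-type distance the paper extends; the function name and the constraint layout are assumptions.

    import numpy as np

    def calibrate_weights(X, d, totals):
        """Adjust initial weights d so that the weighted column sums of X equal
        `totals` exactly, while minimizing the chi-square distance to d.
        Rows of X are enumerated households; columns are constraint variables,
        e.g. persons in each estimation class plus a column of ones for the
        household total."""
        X = np.asarray(X, dtype=float)
        d = np.asarray(d, dtype=float)
        lam = np.linalg.solve(X.T @ (d[:, None] * X), totals - X.T @ d)
        return d * (1.0 + X @ lam)          # note: calibrated weights can go negative

    # Example: 3 households; constraints = (persons in class A, number of households).
    # w = calibrate_weights(X=[[2, 1], [0, 1], [3, 1]], d=[1, 1, 1],
    #                       totals=np.array([6.0, 3.4]))
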
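
The dual system estimation entry above (12-001-X198800214589) builds an error model around the DSE. The sketch below shows only the basic capture-recapture form of the estimator and ignores erroneous enumerations and the other corrections applied in practice; the counts in the example are invented.

    def dual_system_estimate(census_count, pes_count, matched_count):
        """Basic dual system (capture-recapture) estimate of the true population
        size: people counted by the census, people counted by the Post
        Enumeration Survey, and people found in both, combined as C * P / M."""
        return census_count * pes_count / matched_count

    # Invented example for one area: 9,500 census enumerations, 9,800 PES persons,
    # 9,200 matched -> roughly 10,120 people, i.e. about a 6% net undercount.
    n_hat = dual_system_estimate(9500, 9800, 9200)
    undercount_rate = 1.0 - 9500 / n_hat
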
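
The imputation entry above (12-001-X198800214591) contrasts procedures for a contingency table in which match status Y is missing for some records. The sketch below is a generic EM fit for that missing-data pattern under an ignorable (missing at random) mechanism, corresponding in spirit to the first, maximum-likelihood method; it is not the paper's procedure, and the array layout and function name are assumptions.

    import numpy as np

    def em_table(complete_counts, x_only_counts, iters=200):
        """EM estimate of cell probabilities p[x, y] when some records report the
        poststratum x but have match status y missing, assuming the missing-data
        mechanism can be ignored."""
        p = complete_counts + 1.0                  # start away from zero cells
        p = p / p.sum()
        for _ in range(iters):
            # E-step: split each y-missing record across y in proportion to p(y | x)
            cond = p / p.sum(axis=1, keepdims=True)
            filled = complete_counts + x_only_counts[:, None] * cond
            # M-step: refit the cell probabilities from the completed table
            p = filled / filled.sum()
        return p

    # Example: 2 poststrata x 2 match statuses, with (20, 10) records missing y.
    # p_hat = em_table(np.array([[40., 10.], [25., 25.]]), np.array([20., 10.]))
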
Journals and periodicals (0)

No content available at this time.
